Feb 1 01:38:37 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Feb 1 01:38:37 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 1 01:38:37 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 1 01:38:37 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 1 01:38:37 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 1 01:38:37 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 1 01:38:37 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 1 01:38:37 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 1 01:38:37 localhost kernel: signal: max sigframe size: 1776
Feb 1 01:38:37 localhost kernel: BIOS-provided physical RAM map:
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 1 01:38:37 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Feb 1 01:38:37 localhost kernel: NX (Execute Disable) protection: active
Feb 1 01:38:37 localhost kernel: SMBIOS 2.8 present.
Feb 1 01:38:37 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 1 01:38:37 localhost kernel: Hypervisor detected: KVM
Feb 1 01:38:37 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 1 01:38:37 localhost kernel: kvm-clock: using sched offset of 2960669896 cycles
Feb 1 01:38:37 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 1 01:38:37 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 1 01:38:37 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Feb 1 01:38:37 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 1 01:38:37 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 1 01:38:37 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 1 01:38:37 localhost kernel: Using GB pages for direct mapping
Feb 1 01:38:37 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Feb 1 01:38:37 localhost kernel: ACPI: Early table checksum verification disabled
Feb 1 01:38:37 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 1 01:38:37 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 1 01:38:37 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 1 01:38:37 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 1 01:38:37 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 1 01:38:37 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 1 01:38:37 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 1 01:38:37 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 1 01:38:37 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 1 01:38:37 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 1 01:38:37 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 1 01:38:37 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 1 01:38:37 localhost kernel: No NUMA configuration found
Feb 1 01:38:37 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Feb 1 01:38:37 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd3000-0x43fffdfff]
Feb 1 01:38:37 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Feb 1 01:38:37 localhost kernel: Zone ranges:
Feb 1 01:38:37 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 1 01:38:37 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 1 01:38:37 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Feb 1 01:38:37 localhost kernel: Device empty
Feb 1 01:38:37 localhost kernel: Movable zone start for each node
Feb 1 01:38:37 localhost kernel: Early memory node ranges
Feb 1 01:38:37 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 1 01:38:37 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 1 01:38:37 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Feb 1 01:38:37 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Feb 1 01:38:37 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 1 01:38:37 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 1 01:38:37 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 1 01:38:37 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 1 01:38:37 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 1 01:38:37 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 1 01:38:37 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 1 01:38:37 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 1 01:38:37 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 1 01:38:37 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 1 01:38:37 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 1 01:38:37 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 1 01:38:37 localhost kernel: TSC deadline timer available
Feb 1 01:38:37 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 1 01:38:37 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 1 01:38:37 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 1 01:38:37 localhost kernel: Booting paravirtualized kernel on KVM
Feb 1 01:38:37 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 1 01:38:37 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 1 01:38:37 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Feb 1 01:38:37 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 1 01:38:37 localhost kernel: Fallback order for Node 0: 0
Feb 1 01:38:37 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Feb 1 01:38:37 localhost kernel: Policy zone: Normal
Feb 1 01:38:37 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 1 01:38:37 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Feb 1 01:38:37 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 1 01:38:37 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 1 01:38:37 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 1 01:38:37 localhost kernel: software IO TLB: area num 8.
Feb 1 01:38:37 localhost kernel: Memory: 2826284K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741268K reserved, 0K cma-reserved)
Feb 1 01:38:37 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Feb 1 01:38:37 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 1 01:38:37 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Feb 1 01:38:37 localhost kernel: ftrace: allocated 176 pages with 3 groups
Feb 1 01:38:37 localhost kernel: Dynamic Preempt: voluntary
Feb 1 01:38:37 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 1 01:38:37 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 1 01:38:37 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Feb 1 01:38:37 localhost kernel: #011Rude variant of Tasks RCU enabled.
Feb 1 01:38:37 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Feb 1 01:38:37 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 1 01:38:37 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 1 01:38:37 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 1 01:38:37 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 1 01:38:37 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 1 01:38:37 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Feb 1 01:38:37 localhost kernel: Console: colour VGA+ 80x25
Feb 1 01:38:37 localhost kernel: printk: console [tty0] enabled
Feb 1 01:38:37 localhost kernel: printk: console [ttyS0] enabled
Feb 1 01:38:37 localhost kernel: ACPI: Core revision 20211217
Feb 1 01:38:37 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 1 01:38:37 localhost kernel: x2apic enabled
Feb 1 01:38:37 localhost kernel: Switched APIC routing to physical x2apic.
Feb 1 01:38:37 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 1 01:38:37 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 1 01:38:37 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 1 01:38:37 localhost kernel: LSM: Security Framework initializing
Feb 1 01:38:37 localhost kernel: Yama: becoming mindful.
Feb 1 01:38:37 localhost kernel: SELinux: Initializing.
Feb 1 01:38:37 localhost kernel: LSM support for eBPF active
Feb 1 01:38:37 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 1 01:38:37 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 1 01:38:37 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 1 01:38:37 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 1 01:38:37 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 1 01:38:37 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 1 01:38:37 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 1 01:38:37 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 1 01:38:37 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 1 01:38:37 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 1 01:38:37 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 1 01:38:37 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 1 01:38:37 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 1 01:38:37 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 1 01:38:37 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 1 01:38:37 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Feb 1 01:38:37 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 1 01:38:37 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 1 01:38:37 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 1 01:38:37 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 1 01:38:37 localhost kernel: ... version: 0
Feb 1 01:38:37 localhost kernel: ... bit width: 48
Feb 1 01:38:37 localhost kernel: ... generic registers: 6
Feb 1 01:38:37 localhost kernel: ... value mask: 0000ffffffffffff
Feb 1 01:38:37 localhost kernel: ... max period: 00007fffffffffff
Feb 1 01:38:37 localhost kernel: ... fixed-purpose events: 0
Feb 1 01:38:37 localhost kernel: ... event mask: 000000000000003f
Feb 1 01:38:37 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 1 01:38:37 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Feb 1 01:38:37 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 1 01:38:37 localhost kernel: x86: Booting SMP configuration:
Feb 1 01:38:37 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Feb 1 01:38:37 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 1 01:38:37 localhost kernel: smpboot: Max logical packages: 8
Feb 1 01:38:37 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 1 01:38:37 localhost kernel: node 0 deferred pages initialised in 20ms
Feb 1 01:38:37 localhost kernel: devtmpfs: initialized
Feb 1 01:38:37 localhost kernel: x86/mm: Memory block size: 128MB
Feb 1 01:38:37 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 1 01:38:37 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Feb 1 01:38:37 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 1 01:38:37 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 1 01:38:37 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 1 01:38:37 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 1 01:38:37 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 1 01:38:37 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 1 01:38:37 localhost kernel: audit: type=2000 audit(1769927916.087:1): state=initialized audit_enabled=0 res=1
Feb 1 01:38:37 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 1 01:38:37 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 1 01:38:37 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 1 01:38:37 localhost kernel: cpuidle: using governor menu
Feb 1 01:38:37 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Feb 1 01:38:37 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 1 01:38:37 localhost kernel: PCI: Using configuration type 1 for base access
Feb 1 01:38:37 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 1 01:38:37 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 1 01:38:37 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Feb 1 01:38:37 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 1 01:38:37 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 1 01:38:37 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 1 01:38:37 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 1 01:38:37 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 1 01:38:37 localhost kernel: ACPI: Interpreter enabled
Feb 1 01:38:37 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 1 01:38:37 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 1 01:38:37 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 1 01:38:37 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 1 01:38:37 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 1 01:38:37 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 1 01:38:37 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [3] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [4] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [5] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [6] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [7] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [8] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [9] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [10] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [11] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [12] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [13] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [14] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [15] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [16] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [17] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [18] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [19] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [20] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [21] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [22] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [23] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [24] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [25] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [26] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [27] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [28] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [29] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [30] registered
Feb 1 01:38:37 localhost kernel: acpiphp: Slot [31] registered
Feb 1 01:38:37 localhost kernel: PCI host bridge to bus 0000:00
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 1 01:38:37 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 1 01:38:37 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 1 01:38:37 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 1 01:38:37 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 1 01:38:37 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 1 01:38:37 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 1 01:38:37 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 1 01:38:37 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Feb 1 01:38:37 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 1 01:38:37 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 1 01:38:37 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 1 01:38:37 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 1 01:38:37 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 1 01:38:37 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 1 01:38:37 localhost kernel: iommu: Default domain type: Translated
Feb 1 01:38:37 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 1 01:38:37 localhost kernel: SCSI subsystem initialized
Feb 1 01:38:37 localhost kernel: ACPI: bus type USB registered
Feb 1 01:38:37 localhost kernel: usbcore: registered new interface driver usbfs
Feb 1 01:38:37 localhost kernel: usbcore: registered new interface driver hub
Feb 1 01:38:37 localhost kernel: usbcore: registered new device driver usb
Feb 1 01:38:37 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 1 01:38:37 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 1 01:38:37 localhost kernel: PTP clock support registered
Feb 1 01:38:37 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 1 01:38:37 localhost kernel: NetLabel: Initializing
Feb 1 01:38:37 localhost kernel: NetLabel: domain hash size = 128
Feb 1 01:38:37 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 1 01:38:37 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 1 01:38:37 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 1 01:38:37 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 1 01:38:37 localhost kernel: vgaarb: loaded
Feb 1 01:38:37 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 1 01:38:37 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 1 01:38:37 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 1 01:38:37 localhost kernel: pnp: PnP ACPI init
Feb 1 01:38:37 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 1 01:38:37 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 1 01:38:37 localhost kernel: NET: Registered PF_INET protocol family
Feb 1 01:38:37 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 1 01:38:37 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Feb 1 01:38:37 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 1 01:38:37 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 1 01:38:37 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 1 01:38:37 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 1 01:38:37 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Feb 1 01:38:37 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 1 01:38:37 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 1 01:38:37 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 1 01:38:37 localhost kernel: NET: Registered PF_XDP protocol family
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 1 01:38:37 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 1 01:38:37 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 1 01:38:37 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 1 01:38:37 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 27927 usecs
Feb 1 01:38:37 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 1 01:38:37 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 1 01:38:37 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 1 01:38:37 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 1 01:38:37 localhost kernel: ACPI: bus type thunderbolt registered
Feb 1 01:38:37 localhost kernel: Initialise system trusted keyrings
Feb 1 01:38:37 localhost kernel: Key type blacklist registered
Feb 1 01:38:37 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 1 01:38:37 localhost kernel: zbud: loaded
Feb 1 01:38:37 localhost kernel: integrity: Platform Keyring initialized
Feb 1 01:38:37 localhost kernel: NET: Registered PF_ALG protocol family
Feb 1 01:38:37 localhost kernel: xor: automatically using best checksumming function avx
Feb 1 01:38:37 localhost kernel: Key type asymmetric registered
Feb 1 01:38:37 localhost kernel: Asymmetric key parser 'x509' registered
Feb 1 01:38:37 localhost kernel: Running certificate verification selftests
Feb 1 01:38:37 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 1 01:38:37 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 1 01:38:37 localhost kernel: io scheduler mq-deadline registered
Feb 1 01:38:37 localhost kernel: io scheduler kyber registered
Feb 1 01:38:37 localhost kernel: io scheduler bfq registered
Feb 1 01:38:37 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 1 01:38:37 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 1 01:38:37 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 1 01:38:37 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 1 01:38:37 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 1 01:38:37 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 1 01:38:37 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 1 01:38:37 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 1 01:38:37 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 1 01:38:37 localhost kernel: Non-volatile memory driver v1.3
Feb 1 01:38:37 localhost kernel: rdac: device handler registered
Feb 1 01:38:37 localhost kernel: hp_sw: device handler registered
Feb 1 01:38:37 localhost kernel: emc: device handler registered
Feb 1 01:38:37 localhost kernel: alua: device handler registered
Feb 1 01:38:37 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 1 01:38:37 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 1 01:38:37 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 1 01:38:37 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 1 01:38:37 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 1 01:38:37 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 1 01:38:37 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 1 01:38:37 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 1 01:38:37 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 1 01:38:37 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 1 01:38:37 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 1 01:38:37 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 1 01:38:37 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 1 01:38:37 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Feb 1 01:38:37 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 1 01:38:37 localhost kernel: hub 1-0:1.0: USB hub found
Feb 1 01:38:37 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 1 01:38:37 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 1 01:38:37 localhost kernel: usbserial: USB Serial support registered for generic
Feb 1 01:38:37 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 1 01:38:37 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 1 01:38:37 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 1 01:38:37 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 1 01:38:37 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 1 01:38:37 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 1 01:38:37 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 1 01:38:37 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-01T06:38:36 UTC (1769927916)
Feb 1 01:38:37 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 1 01:38:37 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 1 01:38:37 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 1 01:38:37 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 1 01:38:37 localhost kernel: usbcore: registered new interface driver usbhid
Feb 1 01:38:37 localhost kernel: usbhid: USB HID core driver
Feb 1 01:38:37 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 1 01:38:37 localhost kernel: Initializing XFRM netlink socket
Feb 1 01:38:37 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 1 01:38:37 localhost kernel: Segment Routing with IPv6
Feb 1 01:38:37 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 1 01:38:37 localhost kernel: mpls_gso: MPLS GSO support
Feb 1 01:38:37 localhost kernel: IPI shorthand broadcast: enabled
Feb 1 01:38:37 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 1 01:38:37 localhost kernel: AES CTR mode by8 optimization enabled
Feb 1 01:38:37 localhost kernel: sched_clock: Marking stable (764870699, 178189124)->(1070359052, -127299229)
Feb 1 01:38:37 localhost kernel: registered taskstats version 1
Feb 1 01:38:37 localhost kernel: Loading compiled-in X.509 certificates
Feb 1 01:38:37 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 1 01:38:37 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 1 01:38:37 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 1 01:38:37 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 1 01:38:37 localhost kernel: page_owner is disabled
Feb 1 01:38:37 localhost kernel: Key type big_key registered
Feb 1 01:38:37 localhost kernel: Freeing initrd memory: 74232K
Feb 1 01:38:37 localhost kernel: Key type encrypted registered
Feb 1 01:38:37 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 1 01:38:37 localhost kernel: Loading compiled-in module X.509 certificates
Feb 1 01:38:37 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 1 01:38:37 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 1 01:38:37 localhost kernel: ima: No architecture policies found
Feb 1 01:38:37 localhost kernel: evm: Initialising EVM extended attributes:
Feb 1 01:38:37 localhost kernel: evm: security.selinux
Feb 1 01:38:37 localhost kernel: evm: security.SMACK64 (disabled)
Feb 1 01:38:37 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 1 01:38:37 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 1 01:38:37 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 1 01:38:37 localhost kernel: evm: security.apparmor (disabled)
Feb 1 01:38:37 localhost kernel: evm: security.ima
Feb 1 01:38:37 localhost kernel: evm: security.capability
Feb 1 01:38:37 localhost kernel: evm: HMAC attrs: 0x1
Feb 1 01:38:37 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 1 01:38:37 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 1 01:38:37 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 1 01:38:37 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 1 01:38:37 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 1 01:38:37 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 1 01:38:37 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 1 01:38:37 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 1 01:38:37 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 1 01:38:37 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Feb 1 01:38:37 localhost kernel: Write protecting the kernel read-only data: 26624k
Feb 1 01:38:37 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 1 01:38:37 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Feb 1 01:38:37 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 1 01:38:37 localhost kernel: Run /init as init process
Feb 1 01:38:37 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 1 01:38:37 localhost systemd[1]: Detected virtualization kvm.
Feb 1 01:38:37 localhost systemd[1]: Detected architecture x86-64.
Feb 1 01:38:37 localhost systemd[1]: Running in initrd.
Feb 1 01:38:37 localhost systemd[1]: No hostname configured, using default hostname.
Feb 1 01:38:37 localhost systemd[1]: Hostname set to .
Feb 1 01:38:37 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 1 01:38:37 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 1 01:38:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 1 01:38:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 1 01:38:37 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 1 01:38:37 localhost systemd[1]: Reached target Local File Systems.
Feb 1 01:38:37 localhost systemd[1]: Reached target Path Units.
Feb 1 01:38:37 localhost systemd[1]: Reached target Slice Units.
Feb 1 01:38:37 localhost systemd[1]: Reached target Swaps.
Feb 1 01:38:37 localhost systemd[1]: Reached target Timer Units.
Feb 1 01:38:37 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 1 01:38:37 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 1 01:38:37 localhost systemd[1]: Listening on Journal Socket.
Feb 1 01:38:37 localhost systemd[1]: Listening on udev Control Socket.
Feb 1 01:38:37 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 1 01:38:37 localhost systemd[1]: Reached target Socket Units.
Feb 1 01:38:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 1 01:38:37 localhost systemd[1]: Starting Journal Service...
Feb 1 01:38:37 localhost systemd[1]: Starting Load Kernel Modules...
Feb 1 01:38:37 localhost systemd[1]: Starting Create System Users...
Feb 1 01:38:37 localhost systemd[1]: Starting Setup Virtual Console...
Feb 1 01:38:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 1 01:38:37 localhost systemd[1]: Finished Load Kernel Modules.
Feb 1 01:38:37 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 1 01:38:37 localhost systemd-journald[284]: Journal started
Feb 1 01:38:37 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/b72fb79934724728b6e2ec98d2bbb61b) is 8.0M, max 314.7M, 306.7M free.
Feb 1 01:38:37 localhost systemd-modules-load[285]: Module 'msr' is built in
Feb 1 01:38:37 localhost systemd[1]: Started Journal Service.
Feb 1 01:38:37 localhost systemd[1]: Finished Setup Virtual Console.
Feb 1 01:38:37 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 1 01:38:37 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 1 01:38:37 localhost systemd[1]: Starting dracut cmdline hook...
Feb 1 01:38:37 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Feb 1 01:38:37 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Feb 1 01:38:37 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Feb 1 01:38:37 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 1 01:38:37 localhost systemd[1]: Finished Create System Users.
Feb 1 01:38:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 1 01:38:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 1 01:38:37 localhost dracut-cmdline[291]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Feb 1 01:38:37 localhost dracut-cmdline[291]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 1 01:38:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 1 01:38:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 1 01:38:37 localhost systemd[1]: Finished dracut cmdline hook.
Feb 1 01:38:37 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 1 01:38:37 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 1 01:38:37 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 1 01:38:37 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Feb 1 01:38:37 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 1 01:38:37 localhost kernel: RPC: Registered udp transport module.
Feb 1 01:38:37 localhost kernel: RPC: Registered tcp transport module.
Feb 1 01:38:37 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 1 01:38:37 localhost rpc.statd[408]: Version 2.5.4 starting
Feb 1 01:38:37 localhost rpc.statd[408]: Initializing NSM state
Feb 1 01:38:37 localhost rpc.idmapd[413]: Setting log level to 0
Feb 1 01:38:37 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 1 01:38:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 1 01:38:37 localhost systemd-udevd[426]: Using default interface naming scheme 'rhel-9.0'.
Feb 1 01:38:37 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 1 01:38:37 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 1 01:38:37 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 1 01:38:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 1 01:38:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 1 01:38:37 localhost systemd[1]: Reached target System Initialization.
Feb 1 01:38:37 localhost systemd[1]: Reached target Basic System.
Feb 1 01:38:37 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 1 01:38:37 localhost systemd[1]: Reached target Network.
Feb 1 01:38:37 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 1 01:38:37 localhost systemd[1]: Starting dracut initqueue hook...
Feb 1 01:38:37 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Feb 1 01:38:37 localhost kernel: scsi host0: ata_piix
Feb 1 01:38:37 localhost kernel: scsi host1: ata_piix
Feb 1 01:38:37 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 1 01:38:37 localhost kernel: GPT:20971519 != 838860799
Feb 1 01:38:37 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 1 01:38:37 localhost kernel: GPT:20971519 != 838860799
Feb 1 01:38:37 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 1 01:38:37 localhost kernel: vda: vda1 vda2 vda3 vda4
Feb 1 01:38:37 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Feb 1 01:38:37 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Feb 1 01:38:37 localhost systemd-udevd[443]: Network interface NamePolicy= disabled on kernel command line.
Feb 1 01:38:37 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 1 01:38:37 localhost systemd[1]: Reached target Initrd Root Device.
Feb 1 01:38:37 localhost kernel: ata1: found unknown device (class 0)
Feb 1 01:38:37 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 1 01:38:37 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 1 01:38:37 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 1 01:38:37 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 1 01:38:37 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 1 01:38:38 localhost systemd[1]: Finished dracut initqueue hook.
Feb 1 01:38:38 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 1 01:38:38 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 1 01:38:38 localhost systemd[1]: Reached target Remote File Systems.
Feb 1 01:38:38 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 1 01:38:38 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 1 01:38:38 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Feb 1 01:38:38 localhost systemd-fsck[513]: /usr/sbin/fsck.xfs: XFS file system.
Feb 1 01:38:38 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 1 01:38:38 localhost systemd[1]: Mounting /sysroot...
Feb 1 01:38:38 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 1 01:38:38 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Feb 1 01:38:38 localhost kernel: XFS (vda4): Ending clean mount
Feb 1 01:38:38 localhost systemd[1]: Mounted /sysroot.
Feb 1 01:38:38 localhost systemd[1]: Reached target Initrd Root File System.
Feb 1 01:38:38 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 1 01:38:38 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 1 01:38:38 localhost systemd[1]: Reached target Initrd File Systems.
Feb 1 01:38:38 localhost systemd[1]: Reached target Initrd Default Target.
Feb 1 01:38:38 localhost systemd[1]: Starting dracut mount hook...
Feb 1 01:38:38 localhost systemd[1]: Finished dracut mount hook.
Feb 1 01:38:38 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 1 01:38:38 localhost rpc.idmapd[413]: exiting on signal 15
Feb 1 01:38:38 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 1 01:38:38 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 1 01:38:38 localhost systemd[1]: Stopped target Network.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Timer Units.
Feb 1 01:38:38 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 1 01:38:38 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Basic System.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Path Units.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Remote File Systems.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Slice Units.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Socket Units.
Feb 1 01:38:38 localhost systemd[1]: Stopped target System Initialization.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Local File Systems.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Swaps.
Feb 1 01:38:38 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut mount hook.
Feb 1 01:38:38 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 1 01:38:38 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 1 01:38:38 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 1 01:38:38 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 1 01:38:38 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 1 01:38:38 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 1 01:38:38 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 1 01:38:38 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 1 01:38:38 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 1 01:38:38 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 1 01:38:38 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 1 01:38:38 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 1 01:38:38 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 1 01:38:38 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Closed udev Control Socket.
Feb 1 01:38:38 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Closed udev Kernel Socket.
Feb 1 01:38:38 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 1 01:38:38 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 1 01:38:38 localhost systemd[1]: Starting Cleanup udev Database...
Feb 1 01:38:38 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 1 01:38:38 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 1 01:38:38 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Stopped Create System Users.
Feb 1 01:38:38 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 1 01:38:38 localhost systemd[1]: Finished Cleanup udev Database.
Feb 1 01:38:38 localhost systemd[1]: Reached target Switch Root.
Feb 1 01:38:38 localhost systemd[1]: Starting Switch Root...
Feb 1 01:38:38 localhost systemd[1]: Switching root.
Feb 1 01:38:38 localhost systemd-journald[284]: Journal stopped
Feb 1 01:38:39 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Feb 1 01:38:39 localhost kernel: audit: type=1404 audit(1769927918.797:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability open_perms=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability always_check_network=0
Feb 1 01:38:39 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 1 01:38:39 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 1 01:38:39 localhost kernel: audit: type=1403 audit(1769927918.957:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 1 01:38:39 localhost systemd[1]: Successfully loaded SELinux policy in 163.182ms.
Feb 1 01:38:39 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.512ms.
Feb 1 01:38:39 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 1 01:38:39 localhost systemd[1]: Detected virtualization kvm.
Feb 1 01:38:39 localhost systemd[1]: Detected architecture x86-64.
Feb 1 01:38:39 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 01:38:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 01:38:39 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 1 01:38:39 localhost systemd[1]: Stopped Switch Root.
Feb 1 01:38:39 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 1 01:38:39 localhost systemd[1]: Created slice Slice /system/getty.
Feb 1 01:38:39 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 1 01:38:39 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 1 01:38:39 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 1 01:38:39 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Feb 1 01:38:39 localhost systemd[1]: Created slice User and Session Slice.
Feb 1 01:38:39 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 1 01:38:39 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 1 01:38:39 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 1 01:38:39 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 1 01:38:39 localhost systemd[1]: Stopped target Switch Root.
Feb 1 01:38:39 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 1 01:38:39 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 1 01:38:39 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 1 01:38:39 localhost systemd[1]: Reached target Path Units.
Feb 1 01:38:39 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 1 01:38:39 localhost systemd[1]: Reached target Slice Units.
Feb 1 01:38:39 localhost systemd[1]: Reached target Swaps.
Feb 1 01:38:39 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 1 01:38:39 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 1 01:38:39 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 1 01:38:39 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 1 01:38:39 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 1 01:38:39 localhost systemd[1]: Listening on udev Control Socket.
Feb 1 01:38:39 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 1 01:38:39 localhost systemd[1]: Mounting Huge Pages File System...
Feb 1 01:38:39 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 1 01:38:39 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 1 01:38:39 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 1 01:38:39 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 1 01:38:39 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 1 01:38:39 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 1 01:38:39 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 1 01:38:39 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 1 01:38:39 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 1 01:38:39 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 1 01:38:39 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 1 01:38:39 localhost systemd[1]: Stopped Journal Service.
Feb 1 01:38:39 localhost systemd[1]: Starting Journal Service... Feb 1 01:38:39 localhost systemd[1]: Starting Load Kernel Modules... Feb 1 01:38:39 localhost kernel: fuse: init (API version 7.36) Feb 1 01:38:39 localhost systemd[1]: Starting Generate network units from Kernel command line... Feb 1 01:38:39 localhost systemd[1]: Starting Remount Root and Kernel File Systems... Feb 1 01:38:39 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. Feb 1 01:38:39 localhost systemd[1]: Starting Coldplug All udev Devices... Feb 1 01:38:39 localhost systemd[1]: Mounted Huge Pages File System. Feb 1 01:38:39 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff) Feb 1 01:38:39 localhost systemd[1]: Mounted POSIX Message Queue File System. Feb 1 01:38:39 localhost systemd[1]: Mounted Kernel Debug File System. Feb 1 01:38:39 localhost systemd[1]: Mounted Kernel Trace File System. Feb 1 01:38:39 localhost systemd-journald[619]: Journal started Feb 1 01:38:39 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/00836dadc27b01f9fb0a211cca69e688) is 8.0M, max 314.7M, 306.7M free. Feb 1 01:38:39 localhost systemd[1]: Queued start job for default target Multi-User System. Feb 1 01:38:39 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Feb 1 01:38:39 localhost systemd-modules-load[620]: Module 'msr' is built in Feb 1 01:38:39 localhost systemd[1]: Started Journal Service. Feb 1 01:38:39 localhost systemd[1]: Finished Create List of Static Device Nodes. Feb 1 01:38:39 localhost kernel: ACPI: bus type drm_connector registered Feb 1 01:38:39 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 1 01:38:39 localhost systemd[1]: Finished Load Kernel Module configfs. Feb 1 01:38:39 localhost systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 1 01:38:39 localhost systemd[1]: Finished Load Kernel Module drm. Feb 1 01:38:39 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 1 01:38:39 localhost systemd[1]: Finished Load Kernel Module fuse. Feb 1 01:38:39 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network. Feb 1 01:38:39 localhost systemd[1]: Finished Load Kernel Modules. Feb 1 01:38:39 localhost systemd[1]: Finished Generate network units from Kernel command line. Feb 1 01:38:39 localhost systemd[1]: Finished Remount Root and Kernel File Systems. Feb 1 01:38:39 localhost systemd[1]: Mounting FUSE Control File System... Feb 1 01:38:39 localhost systemd[1]: Mounting Kernel Configuration File System... Feb 1 01:38:39 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes). Feb 1 01:38:39 localhost systemd[1]: Starting Rebuild Hardware Database... Feb 1 01:38:39 localhost systemd[1]: Starting Flush Journal to Persistent Storage... Feb 1 01:38:39 localhost systemd[1]: Starting Load/Save Random Seed... Feb 1 01:38:39 localhost systemd[1]: Starting Apply Kernel Variables... Feb 1 01:38:39 localhost systemd[1]: Starting Create System Users... Feb 1 01:38:39 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/00836dadc27b01f9fb0a211cca69e688) is 8.0M, max 314.7M, 306.7M free. Feb 1 01:38:39 localhost systemd-journald[619]: Received client request to flush runtime journal. Feb 1 01:38:39 localhost systemd[1]: Mounted FUSE Control File System. Feb 1 01:38:39 localhost systemd[1]: Mounted Kernel Configuration File System. 
Feb 1 01:38:39 localhost systemd[1]: Finished Flush Journal to Persistent Storage. Feb 1 01:38:39 localhost systemd[1]: Finished Load/Save Random Seed. Feb 1 01:38:39 localhost systemd[1]: Finished Apply Kernel Variables. Feb 1 01:38:39 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes). Feb 1 01:38:39 localhost systemd-sysusers[632]: Creating group 'sgx' with GID 989. Feb 1 01:38:39 localhost systemd-sysusers[632]: Creating group 'systemd-oom' with GID 988. Feb 1 01:38:39 localhost systemd-sysusers[632]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988. Feb 1 01:38:39 localhost systemd[1]: Finished Create System Users. Feb 1 01:38:39 localhost systemd[1]: Starting Create Static Device Nodes in /dev... Feb 1 01:38:39 localhost systemd[1]: Finished Coldplug All udev Devices. Feb 1 01:38:39 localhost systemd[1]: Finished Create Static Device Nodes in /dev. Feb 1 01:38:39 localhost systemd[1]: Reached target Preparation for Local File Systems. Feb 1 01:38:39 localhost systemd[1]: Set up automount EFI System Partition Automount. Feb 1 01:38:40 localhost systemd[1]: Finished Rebuild Hardware Database. Feb 1 01:38:40 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Feb 1 01:38:40 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'. Feb 1 01:38:40 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Feb 1 01:38:40 localhost systemd[1]: Starting Load Kernel Module configfs... Feb 1 01:38:40 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 1 01:38:40 localhost systemd[1]: Finished Load Kernel Module configfs. Feb 1 01:38:40 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped. Feb 1 01:38:40 localhost systemd-udevd[641]: Network interface NamePolicy= disabled on kernel command line. Feb 1 01:38:40 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped. Feb 1 01:38:40 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7... Feb 1 01:38:40 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 1 01:38:40 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6 Feb 1 01:38:40 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped. Feb 1 01:38:40 localhost systemd-fsck[681]: fsck.fat 4.2 (2021-01-31) Feb 1 01:38:40 localhost systemd-fsck[681]: /dev/vda2: 12 files, 1782/51145 clusters Feb 1 01:38:40 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7. 
Feb 1 01:38:40 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 1 01:38:40 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 1 01:38:40 localhost kernel: Console: switching to colour dummy device 80x25 Feb 1 01:38:40 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 1 01:38:40 localhost kernel: [drm] features: -context_init Feb 1 01:38:40 localhost kernel: [drm] number of scanouts: 1 Feb 1 01:38:40 localhost kernel: [drm] number of cap sets: 0 Feb 1 01:38:40 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0 Feb 1 01:38:40 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called Feb 1 01:38:40 localhost kernel: Console: switching to colour frame buffer device 128x48 Feb 1 01:38:40 localhost kernel: SVM: TSC scaling supported Feb 1 01:38:40 localhost kernel: kvm: Nested Virtualization enabled Feb 1 01:38:40 localhost kernel: SVM: kvm: Nested Paging enabled Feb 1 01:38:40 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 1 01:38:40 localhost kernel: SVM: LBR virtualization supported Feb 1 01:38:40 localhost systemd[1]: Mounting /boot... Feb 1 01:38:40 localhost kernel: XFS (vda3): Mounting V5 Filesystem Feb 1 01:38:40 localhost kernel: XFS (vda3): Ending clean mount Feb 1 01:38:40 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff) Feb 1 01:38:40 localhost systemd[1]: Mounted /boot. Feb 1 01:38:40 localhost systemd[1]: Mounting /boot/efi... Feb 1 01:38:40 localhost systemd[1]: Mounted /boot/efi. Feb 1 01:38:40 localhost systemd[1]: Reached target Local File Systems. Feb 1 01:38:40 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache... Feb 1 01:38:40 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux). Feb 1 01:38:40 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 1 01:38:40 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 1 01:38:40 localhost systemd[1]: Starting Automatic Boot Loader Update... Feb 1 01:38:40 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id). Feb 1 01:38:40 localhost systemd[1]: Starting Create Volatile Files and Directories... Feb 1 01:38:40 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 718 (bootctl) Feb 1 01:38:40 localhost systemd[1]: Starting File System Check on /dev/vda2... Feb 1 01:38:40 localhost systemd[1]: Finished File System Check on /dev/vda2. Feb 1 01:38:40 localhost systemd[1]: Mounting EFI System Partition Automount... Feb 1 01:38:40 localhost systemd[1]: Mounted EFI System Partition Automount. Feb 1 01:38:40 localhost systemd[1]: Finished Automatic Boot Loader Update. Feb 1 01:38:40 localhost systemd[1]: Finished Create Volatile Files and Directories. Feb 1 01:38:40 localhost systemd[1]: Starting Security Auditing Service... Feb 1 01:38:40 localhost systemd[1]: Starting RPC Bind... Feb 1 01:38:40 localhost systemd[1]: Starting Rebuild Journal Catalog... 
Feb 1 01:38:40 localhost auditd[727]: audit dispatcher initialized with q_depth=1200 and 1 active plugins Feb 1 01:38:40 localhost auditd[727]: Init complete, auditd 3.0.7 listening for events (startup state enable) Feb 1 01:38:40 localhost systemd[1]: Finished Rebuild Journal Catalog. Feb 1 01:38:40 localhost systemd[1]: Started RPC Bind. Feb 1 01:38:40 localhost augenrules[732]: /sbin/augenrules: No change Feb 1 01:38:40 localhost augenrules[742]: No rules Feb 1 01:38:40 localhost augenrules[742]: enabled 1 Feb 1 01:38:40 localhost augenrules[742]: failure 1 Feb 1 01:38:40 localhost augenrules[742]: pid 727 Feb 1 01:38:40 localhost augenrules[742]: rate_limit 0 Feb 1 01:38:40 localhost augenrules[742]: backlog_limit 8192 Feb 1 01:38:40 localhost augenrules[742]: lost 0 Feb 1 01:38:40 localhost augenrules[742]: backlog 4 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time 60000 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time_actual 0 Feb 1 01:38:40 localhost augenrules[742]: enabled 1 Feb 1 01:38:40 localhost augenrules[742]: failure 1 Feb 1 01:38:40 localhost augenrules[742]: pid 727 Feb 1 01:38:40 localhost augenrules[742]: rate_limit 0 Feb 1 01:38:40 localhost augenrules[742]: backlog_limit 8192 Feb 1 01:38:40 localhost augenrules[742]: lost 0 Feb 1 01:38:40 localhost augenrules[742]: backlog 4 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time 60000 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time_actual 0 Feb 1 01:38:40 localhost augenrules[742]: enabled 1 Feb 1 01:38:40 localhost augenrules[742]: failure 1 Feb 1 01:38:40 localhost augenrules[742]: pid 727 Feb 1 01:38:40 localhost augenrules[742]: rate_limit 0 Feb 1 01:38:40 localhost augenrules[742]: backlog_limit 8192 Feb 1 01:38:40 localhost augenrules[742]: lost 0 Feb 1 01:38:40 localhost augenrules[742]: backlog 4 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time 60000 Feb 1 01:38:40 localhost augenrules[742]: backlog_wait_time_actual 0 Feb 1 01:38:40 localhost systemd[1]: Started Security Auditing Service. Feb 1 01:38:40 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP... Feb 1 01:38:40 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP. Feb 1 01:38:41 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache. Feb 1 01:38:41 localhost systemd[1]: Starting Update is Completed... Feb 1 01:38:41 localhost systemd[1]: Finished Update is Completed. Feb 1 01:38:41 localhost systemd[1]: Reached target System Initialization. Feb 1 01:38:41 localhost systemd[1]: Started dnf makecache --timer. Feb 1 01:38:41 localhost systemd[1]: Started Daily rotation of log files. Feb 1 01:38:41 localhost systemd[1]: Started Daily Cleanup of Temporary Directories. Feb 1 01:38:41 localhost systemd[1]: Reached target Timer Units. Feb 1 01:38:41 localhost systemd[1]: Listening on D-Bus System Message Bus Socket. Feb 1 01:38:41 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket. Feb 1 01:38:41 localhost systemd[1]: Reached target Socket Units. Feb 1 01:38:41 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)... Feb 1 01:38:41 localhost systemd[1]: Starting D-Bus System Message Bus... Feb 1 01:38:41 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 1 01:38:41 localhost systemd[1]: Started D-Bus System Message Bus. 
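Aside: the augenrules output above reports an empty rule set ("No rules", enabled 1, backlog_limit 8192), with the status block repeated for each pass over the rule files. A sketch for checking the live audit state on such a host:

    # kernel audit status; mirrors the enabled/failure/backlog values logged above
    auditctl -s
    # currently loaded rules; prints "No rules" for this configuration
    auditctl -l
    # regenerate and load rules after adding files under /etc/audit/rules.d/
    augenrules --load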
Feb 1 01:38:41 localhost systemd[1]: Reached target Basic System. Feb 1 01:38:41 localhost journal[752]: Ready Feb 1 01:38:41 localhost systemd[1]: Starting NTP client/server... Feb 1 01:38:41 localhost systemd[1]: Starting Restore /run/initramfs on shutdown... Feb 1 01:38:41 localhost systemd[1]: Started irqbalance daemon. Feb 1 01:38:41 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload). Feb 1 01:38:41 localhost systemd[1]: Starting System Logging Service... Feb 1 01:38:41 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 01:38:41 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 01:38:41 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 01:38:41 localhost systemd[1]: Reached target sshd-keygen.target. Feb 1 01:38:41 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met. Feb 1 01:38:41 localhost systemd[1]: Reached target User and Group Name Lookups. Feb 1 01:38:41 localhost systemd[1]: Starting User Login Management... Feb 1 01:38:41 localhost systemd[1]: Finished Restore /run/initramfs on shutdown. Feb 1 01:38:41 localhost rsyslogd[760]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="760" x-info="https://www.rsyslog.com"] start Feb 1 01:38:41 localhost rsyslogd[760]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ] Feb 1 01:38:41 localhost systemd[1]: Started System Logging Service. Feb 1 01:38:41 localhost chronyd[767]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 1 01:38:41 localhost chronyd[767]: Using right/UTC timezone to obtain leap second data Feb 1 01:38:41 localhost chronyd[767]: Loaded seccomp filter (level 2) Feb 1 01:38:41 localhost systemd[1]: Started NTP client/server. Feb 1 01:38:41 localhost systemd-logind[761]: New seat seat0. Feb 1 01:38:41 localhost systemd-logind[761]: Watching system buttons on /dev/input/event0 (Power Button) Feb 1 01:38:41 localhost systemd-logind[761]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 1 01:38:41 localhost systemd[1]: Started User Login Management. Feb 1 01:38:41 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 01:38:41 localhost cloud-init[771]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sun, 01 Feb 2026 06:38:41 +0000. Up 5.89 seconds. Feb 1 01:38:41 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpyreazcb8.mount: Deactivated successfully. Feb 1 01:38:42 localhost systemd[1]: Starting Hostname Service... Feb 1 01:38:42 localhost systemd[1]: Started Hostname Service. Feb 1 01:38:42 localhost systemd-hostnamed[785]: Hostname set to (static) Feb 1 01:38:42 localhost systemd[1]: Finished Initial cloud-init job (pre-networking). 
Feb 1 01:38:42 localhost systemd[1]: Reached target Preparation for Network. Feb 1 01:38:42 localhost systemd[1]: Starting Network Manager... Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2537] NetworkManager (version 1.42.2-1.el9) is starting... (boot:f77db588-715c-4e22-a8c7-41daa1528c92) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2545] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Feb 1 01:38:42 localhost systemd[1]: Started Network Manager. Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2587] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Feb 1 01:38:42 localhost systemd[1]: Reached target Network. Feb 1 01:38:42 localhost systemd[1]: Starting Network Manager Wait Online... Feb 1 01:38:42 localhost systemd[1]: Starting GSSAPI Proxy Daemon... Feb 1 01:38:42 localhost systemd[1]: Starting Enable periodic update of entitlement certificates.... Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2693] manager[0x55bdea407020]: monitoring kernel firmware directory '/lib/firmware'. Feb 1 01:38:42 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2729] hostname: hostname: using hostnamed Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2729] hostname: static hostname changed from (none) to "np0005604215.novalocal" Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2740] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Feb 1 01:38:42 localhost systemd[1]: Started Enable periodic update of entitlement certificates.. Feb 1 01:38:42 localhost systemd[1]: Started GSSAPI Proxy Daemon. Feb 1 01:38:42 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). Feb 1 01:38:42 localhost systemd[1]: Reached target NFS client services. Feb 1 01:38:42 localhost systemd[1]: Reached target Preparation for Remote File Systems. Feb 1 01:38:42 localhost systemd[1]: Reached target Remote File Systems. Feb 1 01:38:42 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2920] manager[0x55bdea407020]: rfkill: Wi-Fi hardware radio set enabled Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.2920] manager[0x55bdea407020]: rfkill: WWAN hardware radio set enabled Feb 1 01:38:42 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch. 
Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3006] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3007] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3021] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3021] manager: Networking is enabled by state file Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3060] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3060] settings: Loaded settings plugin: keyfile (internal) Feb 1 01:38:42 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3092] dhcp: init: Using DHCP client 'internal' Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3098] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3112] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3119] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3128] device (lo): Activation: starting connection 'lo' (d27cc6ff-3b23-4411-8524-3e0f36165c06) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3139] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3145] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 1 01:38:42 localhost systemd[1]: Started Network Manager Script Dispatcher Service. 
Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3182] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3185] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3187] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3189] device (eth0): carrier: link connected Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3194] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3201] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3209] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3213] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3214] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3215] manager: NetworkManager state is now CONNECTING Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3216] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3223] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3226] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3261] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3263] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.3268] device (lo): Activation: successful, device activated. Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4366] dhcp4 (eth0): state changed new lease, address=38.102.83.164 Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4376] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4427] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4449] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4453] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4460] manager: NetworkManager state is now CONNECTED_SITE Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4467] device (eth0): Activation: successful, device activated. 
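Aside: the sequence above shows NetworkManager's internal DHCP client obtaining 38.102.83.164 and activating the 'System eth0' profile as the IPv4 default route. A sketch of the equivalent runtime queries:

    # per-device view: IP4.ADDRESS, IP4.GATEWAY and the DHCP4 lease options
    nmcli device show eth0
    # the auto-activated profile (UUID 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 in this log)
    nmcli connection show "System eth0"
    # overall state; reports "connected" once CONNECTED_GLOBAL is reached
    nmcli general status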
Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4476] manager: NetworkManager state is now CONNECTED_GLOBAL Feb 1 01:38:42 localhost NetworkManager[790]: [1769927922.4482] manager: startup complete Feb 1 01:38:42 localhost systemd[1]: Finished Network Manager Wait Online. Feb 1 01:38:42 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)... Feb 1 01:38:42 localhost systemd[1]: Starting Authorization Manager... Feb 1 01:38:42 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 1 01:38:42 localhost polkitd[1029]: Started polkitd version 0.117 Feb 1 01:38:42 localhost cloud-init[1036]: Cloud-init v. 22.1-9.el9 running 'init' at Sun, 01 Feb 2026 06:38:42 +0000. Up 6.88 seconds. Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | eth0 | True | 38.102.83.164 | 255.255.255.0 | global | fa:16:3e:d0:c8:c4 | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | eth0 | True | fe80::f816:3eff:fed0:c8c4/64 | . | link | fa:16:3e:d0:c8:c4 | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | lo | True | ::1/128 | . | host | . | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +-------+-------------+---------+-----------+-------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | Route | Destination | Gateway | Interface | Flags | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: +-------+-------------+---------+-----------+-------+ Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | 1 | fe80::/64 | :: | eth0 | U | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: | 3 | multicast | :: | eth0 | U | Feb 1 01:38:42 localhost cloud-init[1036]: ci-info: 
+-------+-------------+---------+-----------+-------+ Feb 1 01:38:42 localhost systemd[1]: Started Authorization Manager. Feb 1 01:38:45 localhost cloud-init[1036]: Generating public/private rsa key pair. Feb 1 01:38:45 localhost cloud-init[1036]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key Feb 1 01:38:45 localhost cloud-init[1036]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub Feb 1 01:38:45 localhost cloud-init[1036]: The key fingerprint is: Feb 1 01:38:45 localhost cloud-init[1036]: SHA256:MAOwyvUX8xeemfJ1b1V1PqF7eCjCaNGXGuB7Hgwq9v4 root@np0005604215.novalocal Feb 1 01:38:45 localhost cloud-init[1036]: The key's randomart image is: Feb 1 01:38:45 localhost cloud-init[1036]: +---[RSA 3072]----+ Feb 1 01:38:45 localhost cloud-init[1036]: | ... . .o| Feb 1 01:38:45 localhost cloud-init[1036]: | . .. o . ..+| Feb 1 01:38:45 localhost cloud-init[1036]: | .. +* o + . .o| Feb 1 01:38:45 localhost cloud-init[1036]: |... . .+@ = = + o| Feb 1 01:38:45 localhost cloud-init[1036]: |.. o o =SX B = +.| Feb 1 01:38:45 localhost cloud-init[1036]: | . o o o * o + o| Feb 1 01:38:45 localhost cloud-init[1036]: | . . . o| Feb 1 01:38:45 localhost cloud-init[1036]: | . . | Feb 1 01:38:45 localhost cloud-init[1036]: | ..E | Feb 1 01:38:45 localhost cloud-init[1036]: +----[SHA256]-----+ Feb 1 01:38:45 localhost cloud-init[1036]: Generating public/private ecdsa key pair. Feb 1 01:38:45 localhost cloud-init[1036]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key Feb 1 01:38:45 localhost cloud-init[1036]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub Feb 1 01:38:45 localhost cloud-init[1036]: The key fingerprint is: Feb 1 01:38:45 localhost cloud-init[1036]: SHA256:4cdo6wbuJ+6z8De1ig5y8uOBmXOaH2o/A5y25AffQFA root@np0005604215.novalocal Feb 1 01:38:45 localhost cloud-init[1036]: The key's randomart image is: Feb 1 01:38:45 localhost cloud-init[1036]: +---[ECDSA 256]---+ Feb 1 01:38:45 localhost cloud-init[1036]: | .E | Feb 1 01:38:45 localhost cloud-init[1036]: | . | Feb 1 01:38:45 localhost cloud-init[1036]: | . . | Feb 1 01:38:45 localhost cloud-init[1036]: | . . + | Feb 1 01:38:45 localhost cloud-init[1036]: | . o S o | Feb 1 01:38:45 localhost cloud-init[1036]: | B+. .. o. | Feb 1 01:38:45 localhost cloud-init[1036]: | +B=B+ ... . | Feb 1 01:38:45 localhost cloud-init[1036]: | +@**=o= . | Feb 1 01:38:45 localhost cloud-init[1036]: | .+==OOBoo | Feb 1 01:38:45 localhost cloud-init[1036]: +----[SHA256]-----+ Feb 1 01:38:45 localhost cloud-init[1036]: Generating public/private ed25519 key pair. Feb 1 01:38:45 localhost cloud-init[1036]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key Feb 1 01:38:45 localhost cloud-init[1036]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub Feb 1 01:38:45 localhost cloud-init[1036]: The key fingerprint is: Feb 1 01:38:45 localhost cloud-init[1036]: SHA256:7V//jOvOIaArPbv+zf3jwaAVfF9gL7Joj+dLzfYzdak root@np0005604215.novalocal Feb 1 01:38:45 localhost cloud-init[1036]: The key's randomart image is: Feb 1 01:38:45 localhost cloud-init[1036]: +--[ED25519 256]--+ Feb 1 01:38:45 localhost cloud-init[1036]: | o | Feb 1 01:38:45 localhost cloud-init[1036]: | .. o | Feb 1 01:38:45 localhost cloud-init[1036]: | .o..o| Feb 1 01:38:45 localhost cloud-init[1036]: | . . oo.o| Feb 1 01:38:45 localhost cloud-init[1036]: | S = .o o| Feb 1 01:38:45 localhost cloud-init[1036]: | + +ooo.o| Feb 1 01:38:45 localhost cloud-init[1036]: | .. 
o.= Boo| Feb 1 01:38:45 localhost cloud-init[1036]: | . o. B E X.| Feb 1 01:38:45 localhost cloud-init[1036]: | o==. *+O+@| Feb 1 01:38:45 localhost cloud-init[1036]: +----[SHA256]-----+ Feb 1 01:38:46 localhost sm-notify[1133]: Version 2.5.4 starting Feb 1 01:38:46 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler). Feb 1 01:38:46 localhost systemd[1]: Reached target Cloud-config availability. Feb 1 01:38:46 localhost systemd[1]: Reached target Network is Online. Feb 1 01:38:46 localhost systemd[1]: Starting Apply the settings specified in cloud-config... Feb 1 01:38:46 localhost sshd[1134]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot). Feb 1 01:38:46 localhost systemd[1]: Starting Crash recovery kernel arming... Feb 1 01:38:46 localhost systemd[1]: Starting Notify NFS peers of a restart... Feb 1 01:38:46 localhost systemd[1]: Starting OpenSSH server daemon... Feb 1 01:38:46 localhost systemd[1]: Starting Permit User Sessions... Feb 1 01:38:46 localhost systemd[1]: Started Notify NFS peers of a restart. Feb 1 01:38:46 localhost systemd[1]: Finished Permit User Sessions. Feb 1 01:38:46 localhost systemd[1]: Started Command Scheduler. Feb 1 01:38:46 localhost systemd[1]: Started Getty on tty1. Feb 1 01:38:46 localhost systemd[1]: Started Serial Getty on ttyS0. Feb 1 01:38:46 localhost systemd[1]: Reached target Login Prompts. Feb 1 01:38:46 localhost systemd[1]: Started OpenSSH server daemon. Feb 1 01:38:46 localhost systemd[1]: Reached target Multi-User System. Feb 1 01:38:46 localhost systemd[1]: Starting Record Runlevel Change in UTMP... Feb 1 01:38:46 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 1 01:38:46 localhost systemd[1]: Finished Record Runlevel Change in UTMP. Feb 1 01:38:46 localhost kdumpctl[1137]: kdump: No kdump initial ramdisk found. Feb 1 01:38:46 localhost kdumpctl[1137]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img Feb 1 01:38:46 localhost cloud-init[1282]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sun, 01 Feb 2026 06:38:46 +0000. Up 10.35 seconds. Feb 1 01:38:46 localhost systemd[1]: Finished Apply the settings specified in cloud-config. Feb 1 01:38:46 localhost systemd[1]: Starting Execute cloud user/final scripts... Feb 1 01:38:46 localhost dracut[1419]: dracut-057-21.git20230214.el9 Feb 1 01:38:46 localhost cloud-init[1437]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sun, 01 Feb 2026 06:38:46 +0000. Up 10.71 seconds. 
Feb 1 01:38:46 localhost dracut[1421]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64 Feb 1 01:38:46 localhost cloud-init[1457]: ############################################################# Feb 1 01:38:46 localhost sshd[1453]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost cloud-init[1461]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Feb 1 01:38:46 localhost cloud-init[1469]: 256 SHA256:4cdo6wbuJ+6z8De1ig5y8uOBmXOaH2o/A5y25AffQFA root@np0005604215.novalocal (ECDSA) Feb 1 01:38:46 localhost cloud-init[1475]: 256 SHA256:7V//jOvOIaArPbv+zf3jwaAVfF9gL7Joj+dLzfYzdak root@np0005604215.novalocal (ED25519) Feb 1 01:38:46 localhost cloud-init[1486]: 3072 SHA256:MAOwyvUX8xeemfJ1b1V1PqF7eCjCaNGXGuB7Hgwq9v4 root@np0005604215.novalocal (RSA) Feb 1 01:38:46 localhost cloud-init[1488]: -----END SSH HOST KEY FINGERPRINTS----- Feb 1 01:38:46 localhost sshd[1477]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost cloud-init[1494]: ############################################################# Feb 1 01:38:46 localhost sshd[1504]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost sshd[1529]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost cloud-init[1437]: Cloud-init v. 22.1-9.el9 finished at Sun, 01 Feb 2026 06:38:46 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 10.96 seconds Feb 1 01:38:46 localhost sshd[1549]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost sshd[1561]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Feb 1 01:38:46 localhost sshd[1579]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Feb 1 01:38:46 localhost systemd[1]: Reloading Network Manager... Feb 1 01:38:46 localhost sshd[1601]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost dracut[1421]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! 
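Aside: the cloud-init banner above repeats the SHA256 fingerprints of the freshly generated host keys, and 'modules:final' completes the DataSourceConfigDrive run. A sketch for re-deriving the same information on the host, using only paths already present in the log:

    # recompute the fingerprints printed between the BEGIN/END markers above
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
    ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
    ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
    # confirm that all cloud-init stages (init-local, init, modules:config, modules:final) finished
    cloud-init status --long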
Feb 1 01:38:46 localhost NetworkManager[790]: [1769927926.9125] audit: op="reload" arg="0" pid=1598 uid=0 result="success" Feb 1 01:38:46 localhost NetworkManager[790]: [1769927926.9131] config: signal: SIGHUP (no changes from disk) Feb 1 01:38:46 localhost dracut[1421]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! Feb 1 01:38:46 localhost systemd[1]: Reloaded Network Manager. Feb 1 01:38:46 localhost systemd[1]: Finished Execute cloud user/final scripts. Feb 1 01:38:46 localhost systemd[1]: Reached target Cloud-init target. Feb 1 01:38:46 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Feb 1 01:38:46 localhost sshd[1620]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:38:46 localhost dracut[1421]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Feb 1 01:38:46 localhost dracut[1421]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Feb 1 01:38:46 localhost dracut[1421]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! Feb 1 01:38:47 localhost chronyd[767]: Selected source 209.227.173.244 (2.rhel.pool.ntp.org) Feb 1 01:38:47 localhost chronyd[767]: System clock TAI offset set to 37 seconds Feb 1 01:38:47 localhost dracut[1421]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found! 
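Aside: chronyd has now selected 209.227.173.244 (2.rhel.pool.ntp.org) and set the TAI offset to 37 seconds. A sketch for inspecting synchronization from the shell:

    # list configured sources and which one is currently selected ('*')
    chronyc sources -v
    # offset, stratum and reference ID of the selected source
    chronyc tracking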
Feb 1 01:38:47 localhost dracut[1421]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! Feb 1 01:38:47 localhost dracut[1421]: memstrack is not available Feb 1 01:38:47 localhost dracut[1421]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Feb 1 01:38:47 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Feb 1 01:38:47 localhost dracut[1421]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! 
Feb 1 01:38:47 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! Feb 1 01:38:47 localhost dracut[1421]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! Feb 1 01:38:47 localhost dracut[1421]: memstrack is not available Feb 1 01:38:47 localhost dracut[1421]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Feb 1 01:38:47 localhost dracut[1421]: *** Including module: systemd *** Feb 1 01:38:48 localhost dracut[1421]: *** Including module: systemd-initrd *** Feb 1 01:38:48 localhost dracut[1421]: *** Including module: i18n *** Feb 1 01:38:48 localhost dracut[1421]: No KEYMAP configured. Feb 1 01:38:48 localhost dracut[1421]: *** Including module: drm *** Feb 1 01:38:48 localhost dracut[1421]: *** Including module: prefixdevname *** Feb 1 01:38:48 localhost dracut[1421]: *** Including module: kernel-modules *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: kernel-modules-extra *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: qemu *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: fstab-sys *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: rootfs-block *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: terminfo *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: udev-rules *** Feb 1 01:38:49 localhost dracut[1421]: Skipping udev rule: 91-permissions.rules Feb 1 01:38:49 localhost dracut[1421]: Skipping udev rule: 80-drivers-modprobe.rules Feb 1 01:38:49 localhost dracut[1421]: *** Including module: virtiofs *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: dracut-systemd *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: usrmount *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: base *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: fs-lib *** Feb 1 01:38:49 localhost dracut[1421]: *** Including module: kdumpbase *** Feb 1 01:38:50 localhost dracut[1421]: *** Including module: microcode_ctl-fw_dir_override *** Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl module: mangling fw_dir Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-2d-07" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-4e-03" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-4f-01" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"... 
Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-55-04" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-5e-03" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8c-01" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"... Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored Feb 1 01:38:50 localhost dracut[1421]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware" Feb 1 01:38:50 localhost dracut[1421]: *** Including module: shutdown *** Feb 1 01:38:50 localhost dracut[1421]: *** Including module: squash *** Feb 1 01:38:50 localhost dracut[1421]: *** Including modules done *** Feb 1 01:38:50 localhost dracut[1421]: *** Installing kernel module dependencies *** Feb 1 01:38:50 localhost dracut[1421]: *** Installing kernel module dependencies done *** Feb 1 01:38:50 localhost dracut[1421]: *** Resolving executable dependencies *** Feb 1 01:38:52 localhost dracut[1421]: *** Resolving executable dependencies done *** Feb 1 01:38:52 localhost dracut[1421]: *** Hardlinking files *** Feb 1 01:38:52 localhost dracut[1421]: Mode: real Feb 1 01:38:52 localhost dracut[1421]: Files: 1099 Feb 1 01:38:52 localhost dracut[1421]: Linked: 3 files Feb 1 01:38:52 localhost dracut[1421]: Compared: 0 xattrs Feb 1 01:38:52 localhost dracut[1421]: Compared: 373 files Feb 1 01:38:52 localhost dracut[1421]: Saved: 61.04 KiB Feb 1 01:38:52 localhost dracut[1421]: Duration: 0.047188 seconds Feb 1 01:38:52 localhost dracut[1421]: *** Hardlinking files done *** Feb 1 01:38:52 localhost dracut[1421]: Could not find 'strip'. Not stripping the initramfs. Feb 1 01:38:52 localhost dracut[1421]: *** Generating early-microcode cpio image *** Feb 1 01:38:52 localhost dracut[1421]: *** Constructing AuthenticAMD.bin *** Feb 1 01:38:52 localhost dracut[1421]: *** Store current command line parameters *** Feb 1 01:38:52 localhost dracut[1421]: Stored kernel commandline: Feb 1 01:38:52 localhost dracut[1421]: No dracut internal kernel commandline stored in the initramfs Feb 1 01:38:52 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. 
Feb 1 01:38:52 localhost dracut[1421]: *** Install squash loader *** Feb 1 01:38:53 localhost dracut[1421]: *** Squashing the files inside the initramfs *** Feb 1 01:38:54 localhost dracut[1421]: *** Squashing the files inside the initramfs done *** Feb 1 01:38:54 localhost dracut[1421]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' *** Feb 1 01:38:54 localhost dracut[1421]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done *** Feb 1 01:38:54 localhost kdumpctl[1137]: kdump: kexec: loaded kdump kernel Feb 1 01:38:54 localhost kdumpctl[1137]: kdump: Starting kdump: [OK] Feb 1 01:38:54 localhost systemd[1]: Finished Crash recovery kernel arming. Feb 1 01:38:54 localhost systemd[1]: Startup finished in 1.156s (kernel) + 1.808s (initrd) + 15.965s (userspace) = 18.930s. Feb 1 01:39:07 localhost sshd[4175]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:39:07 localhost systemd[1]: Created slice User Slice of UID 1000. Feb 1 01:39:07 localhost systemd[1]: Starting User Runtime Directory /run/user/1000... Feb 1 01:39:07 localhost systemd-logind[761]: New session 1 of user zuul. Feb 1 01:39:07 localhost systemd[1]: Finished User Runtime Directory /run/user/1000. Feb 1 01:39:07 localhost systemd[1]: Starting User Manager for UID 1000... Feb 1 01:39:07 localhost systemd[4179]: Queued start job for default target Main User Target. Feb 1 01:39:07 localhost systemd[4179]: Created slice User Application Slice. Feb 1 01:39:07 localhost systemd[4179]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 01:39:07 localhost systemd[4179]: Started Daily Cleanup of User's Temporary Directories. Feb 1 01:39:07 localhost systemd[4179]: Reached target Paths. Feb 1 01:39:07 localhost systemd[4179]: Reached target Timers. Feb 1 01:39:07 localhost systemd[4179]: Starting D-Bus User Message Bus Socket... Feb 1 01:39:07 localhost systemd[4179]: Starting Create User's Volatile Files and Directories... Feb 1 01:39:07 localhost systemd[4179]: Listening on D-Bus User Message Bus Socket. Feb 1 01:39:07 localhost systemd[4179]: Finished Create User's Volatile Files and Directories. Feb 1 01:39:07 localhost systemd[4179]: Reached target Sockets. Feb 1 01:39:07 localhost systemd[4179]: Reached target Basic System. Feb 1 01:39:07 localhost systemd[4179]: Reached target Main User Target. Feb 1 01:39:07 localhost systemd[4179]: Startup finished in 109ms. Feb 1 01:39:07 localhost systemd[1]: Started User Manager for UID 1000. Feb 1 01:39:07 localhost systemd[1]: Started Session 1 of User zuul. Feb 1 01:39:07 localhost python3[4231]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 01:39:12 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
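Aside: the kdump rebuild started at 01:38:46 finishes here ("kexec: loaded kdump kernel") and systemd reports the overall startup time. A sketch for verifying the result, again using only paths already present in the log:

    # confirm the crash kernel is loaded and the kdump service is operational
    kdumpctl status
    # inspect the initramfs dracut just wrote
    lsinitrd /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img | head
    # break down the 18.930s startup reported above
    systemd-analyze critical-chain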
Feb 1 01:39:21 localhost python3[4252]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 01:39:27 localhost python3[4305]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 01:39:28 localhost python3[4335]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present Feb 1 01:39:31 localhost python3[4351]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:39:32 localhost python3[4365]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:33 localhost python3[4424]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:39:34 localhost python3[4465]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769927973.4524443-394-277321625407689/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=fade19abcb7148119bae13ccbb795d6e_id_rsa follow=False checksum=1450e921e2d17379ea725f99be2eea1fb6e75a52 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:35 localhost python3[4538]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:39:35 localhost python3[4579]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769927975.1587512-494-52220837057847/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=fade19abcb7148119bae13ccbb795d6e_id_rsa.pub follow=False checksum=ad19e951a009809a91d74da158b058ce7df88458 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:37 localhost python3[4607]: ansible-ping Invoked with data=pong Feb 1 01:39:39 localhost python3[4621]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 01:39:43 localhost python3[4674]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False 
image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None Feb 1 01:39:45 localhost python3[4696]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:46 localhost python3[4710]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:46 localhost python3[4724]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:47 localhost python3[4738]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:47 localhost python3[4752]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:48 localhost python3[4766]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:50 localhost python3[4782]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:51 localhost python3[4830]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:39:52 localhost python3[4873]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769927991.6463-104-6990630972916/source 
follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:39:53 localhost chronyd[767]: Selected source 138.197.164.54 (2.rhel.pool.ntp.org) Feb 1 01:39:59 localhost python3[4901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:00 localhost python3[4915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:00 localhost python3[4929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:00 localhost python3[4943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:00 localhost python3[4957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:01 localhost python3[4971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:01 localhost python3[4986]: ansible-authorized_key Invoked with user=zuul state=present 
key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:01 localhost python3[5000]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:01 localhost python3[5014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:02 localhost python3[5028]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:02 localhost python3[5042]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:02 localhost python3[5056]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:03 localhost python3[5070]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:03 localhost python3[5084]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:03 localhost python3[5098]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:03 localhost python3[5112]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:03 localhost python3[5126]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:04 localhost python3[5140]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:04 localhost python3[5154]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:04 localhost python3[5168]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:05 localhost python3[5182]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:05 localhost python3[5196]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:05 localhost python3[5210]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:05 localhost python3[5224]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:06 localhost python3[5238]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:06 localhost python3[5252]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:40:07 localhost python3[5268]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Feb 1 01:40:07 localhost systemd[1]: Starting Time & Date Service... Feb 1 01:40:07 localhost systemd[1]: Started Time & Date Service. Feb 1 01:40:07 localhost systemd-timedated[5270]: Changed time zone to 'UTC' (UTC). 
Feb 1 01:40:09 localhost python3[5289]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:10 localhost python3[5335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:40:10 localhost python3[5376]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769928009.977927-500-130948598554410/source _original_basename=tmp17zwbxb9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:11 localhost python3[5436]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:40:11 localhost python3[5477]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769928011.4460287-592-3203137487946/source _original_basename=tmpsaz2xzgk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:13 localhost python3[5539]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:40:14 localhost python3[5582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769928013.4437199-732-129222601682913/source _original_basename=tmpkb6xt0pw follow=False checksum=9313104c4584898a1afe992edc322b557e0f1f28 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:15 localhost python3[5610]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:40:15 localhost python3[5626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:40:16 localhost python3[5676]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Feb 1 01:40:17 localhost python3[5719]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769928016.4272835-860-95594803417146/source _original_basename=tmpwilrtpym follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:18 localhost python3[5750]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-3a26-ae0f-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:40:19 localhost python3[5768]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-3a26-ae0f-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Feb 1 01:40:21 localhost python3[5786]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:40:37 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Feb 1 01:40:39 localhost python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:41:39 localhost systemd-logind[761]: Session 1 logged out. Waiting for processes to exit. Feb 1 01:41:40 localhost systemd[4179]: Starting Mark boot as successful... Feb 1 01:41:40 localhost systemd[4179]: Finished Mark boot as successful. Feb 1 01:42:42 localhost systemd[1]: Unmounting EFI System Partition Automount... Feb 1 01:42:42 localhost systemd[1]: efi.mount: Deactivated successfully. Feb 1 01:42:42 localhost systemd[1]: Unmounted EFI System Partition Automount. Feb 1 01:44:40 localhost systemd[4179]: Created slice User Background Tasks Slice. Feb 1 01:44:40 localhost systemd[4179]: Starting Cleanup of User's Temporary Files and Directories... Feb 1 01:44:40 localhost systemd[4179]: Finished Cleanup of User's Temporary Files and Directories. 
Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Feb 1 01:44:44 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Feb 1 01:44:44 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3073] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Feb 1 01:44:44 localhost systemd-udevd[5814]: Network interface NamePolicy= disabled on kernel command line. Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3224] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3262] settings: (eth1): created default wired connection 'Wired connection 1' Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3267] device (eth1): carrier: link connected Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3271] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3277] policy: auto-activating connection 'Wired connection 1' (ba1ceec1-c224-34b6-a0a4-ea1192c7597e) Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3284] device (eth1): Activation: starting connection 'Wired connection 1' (ba1ceec1-c224-34b6-a0a4-ea1192c7597e) Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3286] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3291] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3298] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 1 01:44:44 localhost NetworkManager[790]: [1769928284.3303] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Feb 1 01:44:45 localhost sshd[5817]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:44:45 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Feb 1 01:44:45 localhost systemd-logind[761]: New session 3 of user zuul. Feb 1 01:44:45 localhost systemd[1]: Started Session 3 of User zuul. 
Feb 1 01:44:45 localhost python3[5834]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-9afb-5883-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:44:58 localhost python3[5884]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:44:59 localhost python3[5927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769928298.4675562-537-185031792863134/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c5ec26e43b1f8e7018b3bd3d9cbfeb38dd096269 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:44:59 localhost python3[5957]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 01:45:00 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Feb 1 01:45:00 localhost systemd[1]: Stopped Network Manager Wait Online. Feb 1 01:45:00 localhost systemd[1]: Stopping Network Manager Wait Online... Feb 1 01:45:00 localhost systemd[1]: Stopping Network Manager... Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.6829] caught SIGTERM, shutting down normally. Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.6923] dhcp4 (eth0): canceled DHCP transaction Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.6924] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.6924] dhcp4 (eth0): state changed no lease Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.6930] manager: NetworkManager state is now CONNECTING Feb 1 01:45:00 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.7113] dhcp4 (eth1): canceled DHCP transaction Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.7115] dhcp4 (eth1): state changed no lease Feb 1 01:45:00 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 1 01:45:00 localhost NetworkManager[790]: [1769928300.7210] exiting (success) Feb 1 01:45:00 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Feb 1 01:45:00 localhost systemd[1]: Stopped Network Manager. Feb 1 01:45:00 localhost systemd[1]: NetworkManager.service: Consumed 2.843s CPU time. Feb 1 01:45:00 localhost systemd[1]: Starting Network Manager... Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.7756] NetworkManager (version 1.42.2-1.el9) is starting... 
(after a restart, boot:f77db588-715c-4e22-a8c7-41daa1528c92) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.7759] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.7786] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Feb 1 01:45:00 localhost systemd[1]: Started Network Manager. Feb 1 01:45:00 localhost systemd[1]: Starting Network Manager Wait Online... Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.7852] manager[0x556748592090]: monitoring kernel firmware directory '/lib/firmware'. Feb 1 01:45:00 localhost systemd[1]: Starting Hostname Service... Feb 1 01:45:00 localhost systemd[1]: Started Hostname Service. Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8736] hostname: hostname: using hostnamed Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8738] hostname: static hostname changed from (none) to "np0005604215.novalocal" Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8747] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8755] manager[0x556748592090]: rfkill: Wi-Fi hardware radio set enabled Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8756] manager[0x556748592090]: rfkill: WWAN hardware radio set enabled Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8797] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8799] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8801] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8803] manager: Networking is enabled by state file Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8812] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8814] settings: Loaded settings plugin: keyfile (internal) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8864] dhcp: init: Using DHCP client 'internal' Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8869] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8879] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8889] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8906] device (lo): Activation: starting connection 'lo' (d27cc6ff-3b23-4411-8524-3e0f36165c06) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8917] device (eth0): carrier: link connected Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8925] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8934] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8936] device (eth0): state change: 
unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8947] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8959] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8970] device (eth1): carrier: link connected Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8977] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8988] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ba1ceec1-c224-34b6-a0a4-ea1192c7597e) (indicated) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8990] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.8999] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9011] device (eth1): Activation: starting connection 'Wired connection 1' (ba1ceec1-c224-34b6-a0a4-ea1192c7597e) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9041] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9057] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9060] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9064] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9069] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9072] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9076] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9080] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9097] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9103] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9120] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9125] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9175] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9183] device (lo): state change: secondaries -> activated (reason 'none', 
sys-iface-state: 'external') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9190] device (lo): Activation: successful, device activated. Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9201] dhcp4 (eth0): state changed new lease, address=38.102.83.164 Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9207] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9290] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9322] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9325] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9334] manager: NetworkManager state is now CONNECTED_SITE Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9338] device (eth0): Activation: successful, device activated. Feb 1 01:45:00 localhost NetworkManager[5972]: [1769928300.9350] manager: NetworkManager state is now CONNECTED_GLOBAL Feb 1 01:45:01 localhost python3[6030]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-9afb-5883-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:45:11 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 1 01:45:30 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 1 01:45:45 localhost NetworkManager[5972]: [1769928345.8218] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:45 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 1 01:45:45 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 1 01:45:45 localhost NetworkManager[5972]: [1769928345.8416] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:45 localhost NetworkManager[5972]: [1769928345.8420] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Feb 1 01:45:45 localhost NetworkManager[5972]: [1769928345.8442] device (eth1): Activation: successful, device activated. Feb 1 01:45:45 localhost NetworkManager[5972]: [1769928345.8453] manager: startup complete Feb 1 01:45:45 localhost systemd[1]: Finished Network Manager Wait Online. Feb 1 01:45:55 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 1 01:46:01 localhost systemd[1]: session-3.scope: Deactivated successfully. Feb 1 01:46:01 localhost systemd[1]: session-3.scope: Consumed 1.445s CPU time. Feb 1 01:46:01 localhost systemd-logind[761]: Session 3 logged out. Waiting for processes to exit. Feb 1 01:46:01 localhost systemd-logind[761]: Removed session 3. Feb 1 01:46:25 localhost sshd[6060]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:46:25 localhost systemd-logind[761]: New session 4 of user zuul. Feb 1 01:46:25 localhost systemd[1]: Started Session 4 of User zuul. 
Feb 1 01:46:26 localhost python3[6111]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:46:26 localhost python3[6154]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769928385.8376184-628-1439053193405/source _original_basename=tmplgaibzgz follow=False checksum=b662c6ad0fdede3f6b8f2737681b36760d23a74b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:46:29 localhost systemd[1]: session-4.scope: Deactivated successfully. Feb 1 01:46:29 localhost systemd-logind[761]: Session 4 logged out. Waiting for processes to exit. Feb 1 01:46:29 localhost systemd-logind[761]: Removed session 4. Feb 1 01:48:32 localhost chronyd[767]: Selected source 209.227.173.244 (2.rhel.pool.ntp.org) Feb 1 01:53:40 localhost systemd[1]: Starting Cleanup of Temporary Directories... Feb 1 01:53:40 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 1 01:53:40 localhost systemd[1]: Finished Cleanup of Temporary Directories. Feb 1 01:53:40 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 1 01:54:06 localhost sshd[6176]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:54:06 localhost systemd-logind[761]: New session 5 of user zuul. Feb 1 01:54:06 localhost systemd[1]: Started Session 5 of User zuul. Feb 1 01:54:06 localhost python3[6195]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-ef6d-83b0-0000000021a5-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:08 localhost python3[6214]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:08 localhost python3[6230]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:08 localhost python3[6246]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:08 localhost python3[6262]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:09 localhost python3[6278]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:10 localhost python3[6326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:54:11 localhost python3[6369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769928850.6871905-672-272484530281361/source _original_basename=tmpo44v3sz0 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:54:12 localhost python3[6399]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 01:54:12 localhost systemd[1]: Reloading. Feb 1 01:54:12 localhost systemd-rc-local-generator[6417]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 01:54:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 01:54:14 localhost python3[6445]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None Feb 1 01:54:15 localhost python3[6461]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:15 localhost python3[6479]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:16 localhost python3[6497]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:16 localhost python3[6515]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:17 localhost python3[6532]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-ef6d-83b0-0000000021ac-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:54:18 localhost python3[6552]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 01:54:21 localhost systemd[1]: session-5.scope: Deactivated successfully. Feb 1 01:54:21 localhost systemd[1]: session-5.scope: Consumed 3.853s CPU time. Feb 1 01:54:21 localhost systemd-logind[761]: Session 5 logged out. Waiting for processes to exit. Feb 1 01:54:21 localhost systemd-logind[761]: Removed session 5. Feb 1 01:55:23 localhost sshd[6559]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:55:23 localhost systemd[1]: Started Session 6 of User zuul. Feb 1 01:55:23 localhost systemd-logind[761]: New session 6 of user zuul. Feb 1 01:55:24 localhost systemd[1]: Starting RHSM dbus service... Feb 1 01:55:24 localhost systemd[1]: Started RHSM dbus service. 
Feb 1 01:55:24 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:24 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:24 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:24 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:25 localhost rhsm-service[6583]: INFO [subscription_manager.managerlib:90] Consumer created: np0005604215.novalocal (228c691b-7b73-45e5-afcd-2aea3d003268) Feb 1 01:55:25 localhost subscription-manager[6583]: Registered system with identity: 228c691b-7b73-45e5-afcd-2aea3d003268 Feb 1 01:55:26 localhost rhsm-service[6583]: INFO [subscription_manager.entcertlib:131] certs updated: Feb 1 01:55:26 localhost rhsm-service[6583]: Total updates: 1 Feb 1 01:55:26 localhost rhsm-service[6583]: Found (local) serial# [] Feb 1 01:55:26 localhost rhsm-service[6583]: Expected (UEP) serial# [9104674843723702672] Feb 1 01:55:26 localhost rhsm-service[6583]: Added (new) Feb 1 01:55:26 localhost rhsm-service[6583]: [sn:9104674843723702672 ( Content Access,) @ /etc/pki/entitlement/9104674843723702672.pem] Feb 1 01:55:26 localhost rhsm-service[6583]: Deleted (rogue): Feb 1 01:55:26 localhost rhsm-service[6583]: Feb 1 01:55:26 localhost subscription-manager[6583]: Added subscription for 'Content Access' contract 'None' Feb 1 01:55:26 localhost subscription-manager[6583]: Added subscription for product ' Content Access' Feb 1 01:55:27 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:27 localhost rhsm-service[6583]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Feb 1 01:55:27 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:55:27 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:55:27 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:55:27 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:55:28 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:55:31 localhost python3[6674]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1e29-84da-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 01:56:24 localhost systemd[1]: Starting dnf makecache... 
Feb 1 01:56:24 localhost python3[6694]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 01:56:25 localhost dnf[6693]: Updating Subscription Management repositories. Feb 1 01:56:26 localhost dnf[6693]: Failed determining last makecache time. Feb 1 01:56:27 localhost dnf[6693]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 35 MB/s | 14 MB 00:00 Feb 1 01:56:29 localhost dnf[6693]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 52 MB/s | 24 MB 00:00 Feb 1 01:56:34 localhost dnf[6693]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 70 MB/s | 42 MB 00:00 Feb 1 01:56:42 localhost dnf[6693]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 71 MB/s | 44 MB 00:00 Feb 1 01:56:47 localhost dnf[6693]: Last metadata expiration check: 0:00:02 ago on Sun Feb 1 06:56:42 2026. Feb 1 01:56:49 localhost dnf[6693]: Metadata cache created. Feb 1 01:56:49 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 1 01:56:49 localhost systemd[1]: Finished dnf makecache. Feb 1 01:56:49 localhost systemd[1]: dnf-makecache.service: Consumed 23.169s CPU time. Feb 1 01:56:53 localhost setsebool[6769]: The virt_use_nfs policy boolean was changed to 1 by root Feb 1 01:56:53 localhost setsebool[6769]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root Feb 1 01:57:01 localhost kernel: SELinux: Converting 406 SID table entries... Feb 1 01:57:01 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 01:57:01 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 01:57:01 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 01:57:01 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 01:57:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 01:57:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 01:57:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 01:57:14 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=3 res=1 Feb 1 01:57:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 01:57:14 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 01:57:14 localhost systemd[1]: Reloading. Feb 1 01:57:14 localhost systemd-rc-local-generator[7668]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 01:57:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 01:57:15 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 01:57:16 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 01:57:18 localhost systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3624694533-merged.mount: Deactivated successfully. 
Feb 1 01:57:18 localhost podman[12826]: 2026-02-01 06:57:18.863616734 +0000 UTC m=+0.107065104 system refresh Feb 1 01:57:19 localhost systemd[4179]: Starting D-Bus User Message Bus... Feb 1 01:57:19 localhost dbus-broker-launch[14398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 1 01:57:19 localhost dbus-broker-launch[14398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 1 01:57:19 localhost systemd[4179]: Started D-Bus User Message Bus. Feb 1 01:57:19 localhost journal[14398]: Ready Feb 1 01:57:19 localhost systemd[4179]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1 Feb 1 01:57:19 localhost systemd[4179]: Created slice Slice /user. Feb 1 01:57:19 localhost systemd[4179]: podman-14248.scope: unit configures an IP firewall, but not running as root. Feb 1 01:57:19 localhost systemd[4179]: (This warning is only shown for the first unit using IP firewalling.) Feb 1 01:57:19 localhost systemd[4179]: Started podman-14248.scope. Feb 1 01:57:19 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 01:57:20 localhost systemd[4179]: Started podman-pause-ef96ed7a.scope. Feb 1 01:57:20 localhost systemd[1]: session-6.scope: Deactivated successfully. Feb 1 01:57:20 localhost systemd[1]: session-6.scope: Consumed 30.800s CPU time. Feb 1 01:57:20 localhost systemd-logind[761]: Session 6 logged out. Waiting for processes to exit. Feb 1 01:57:20 localhost systemd-logind[761]: Removed session 6. Feb 1 01:57:22 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 01:57:22 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 01:57:22 localhost systemd[1]: man-db-cache-update.service: Consumed 9.127s CPU time. Feb 1 01:57:22 localhost systemd[1]: run-rd67729f7e47a46dd9723e7c61dc6c308.service: Deactivated successfully. Feb 1 01:57:35 localhost sshd[18427]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:35 localhost sshd[18428]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:35 localhost sshd[18430]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:35 localhost sshd[18429]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:35 localhost sshd[18431]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:39 localhost sshd[18437]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:57:39 localhost systemd-logind[761]: New session 7 of user zuul. Feb 1 01:57:39 localhost systemd[1]: Started Session 7 of User zuul. Feb 1 01:57:39 localhost python3[18454]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBAgohDMlstWoPOrziVyT3cq7c4YoWvTNp64hcksvV2VrQsWD6YrTZBaXHL0twL/A8QbTt5cQ7NNpUOjUCI5d4= zuul@np0005604206.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:57:40 localhost python3[18470]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBAgohDMlstWoPOrziVyT3cq7c4YoWvTNp64hcksvV2VrQsWD6YrTZBaXHL0twL/A8QbTt5cQ7NNpUOjUCI5d4= zuul@np0005604206.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:57:42 localhost systemd[1]: session-7.scope: Deactivated successfully. 
Feb 1 01:57:42 localhost systemd-logind[761]: Session 7 logged out. Waiting for processes to exit. Feb 1 01:57:42 localhost systemd-logind[761]: Removed session 7. Feb 1 01:59:00 localhost sshd[18472]: main: sshd: ssh-rsa algorithm is disabled Feb 1 01:59:00 localhost systemd-logind[761]: New session 8 of user zuul. Feb 1 01:59:00 localhost systemd[1]: Started Session 8 of User zuul. Feb 1 01:59:00 localhost python3[18491]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 01:59:02 localhost python3[18507]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604215.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 1 01:59:04 localhost python3[18557]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:59:04 localhost python3[18600]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769929144.0698407-136-80285915147853/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=fade19abcb7148119bae13ccbb795d6e_id_rsa follow=False checksum=1450e921e2d17379ea725f99be2eea1fb6e75a52 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:06 localhost python3[18662]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:59:06 localhost python3[18705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769929145.7265825-226-260773617685089/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=fade19abcb7148119bae13ccbb795d6e_id_rsa.pub follow=False checksum=ad19e951a009809a91d74da158b058ce7df88458 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:08 localhost python3[18735]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:09 localhost python3[18781]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:59:09 localhost python3[18797]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmp14i9g_e6 recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:10 localhost python3[18857]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:59:11 localhost python3[18873]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpypscrmh9 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:12 localhost python3[18933]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 01:59:12 localhost python3[18949]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmptx2asf_z recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 01:59:13 localhost systemd[1]: session-8.scope: Deactivated successfully. Feb 1 01:59:13 localhost systemd[1]: session-8.scope: Consumed 3.475s CPU time. Feb 1 01:59:13 localhost systemd-logind[761]: Session 8 logged out. Waiting for processes to exit. Feb 1 01:59:13 localhost systemd-logind[761]: Removed session 8. Feb 1 02:01:14 localhost sshd[18980]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:15 localhost systemd-logind[761]: New session 9 of user zuul. Feb 1 02:01:15 localhost systemd[1]: Started Session 9 of User zuul. 
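The zuul-build-sshkey deployment above copies a private/public keypair into /root/.ssh and creates /etc/nodepool; Ansible logs file modes in decimal, so mode=384 is octal 0600 and mode=420 is octal 0644. A sketch with hypothetical source paths:
    install -o root -g root -m 0600 ./id_rsa     /root/.ssh/id_rsa       # mode 384 decimal
    install -o root -g root -m 0644 ./id_rsa.pub /root/.ssh/id_rsa.pub   # mode 420 decimal
    mkdir -p -m 0777 /etc/nodepool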
Feb 1 02:01:15 localhost python3[19026]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:01:19 localhost sshd[19028]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:22 localhost sshd[19030]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:27 localhost sshd[19032]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:31 localhost sshd[19034]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:34 localhost sshd[19036]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:38 localhost sshd[19038]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:42 localhost sshd[19040]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:45 localhost sshd[19042]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:49 localhost sshd[19044]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:53 localhost sshd[19046]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:01:56 localhost sshd[19048]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:00 localhost sshd[19050]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:04 localhost sshd[19052]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:07 localhost sshd[19054]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:11 localhost sshd[19056]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:15 localhost sshd[19058]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:18 localhost sshd[19060]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:22 localhost sshd[19062]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:27 localhost sshd[19064]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:30 localhost sshd[19066]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:34 localhost sshd[19068]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:39 localhost sshd[19070]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:39 localhost sshd[19072]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:42 localhost sshd[19074]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:46 localhost sshd[19076]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:50 localhost sshd[19079]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:53 localhost sshd[19081]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:02:58 localhost sshd[19083]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:03 localhost sshd[19085]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:06 localhost sshd[19087]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:11 localhost sshd[19089]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:15 localhost sshd[19091]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:19 localhost sshd[19093]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:03:23 localhost sshd[19095]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:06:14 localhost systemd[1]: session-9.scope: Deactivated successfully. Feb 1 02:06:14 localhost systemd-logind[761]: Session 9 logged out. Waiting for processes to exit. Feb 1 02:06:14 localhost systemd-logind[761]: Removed session 9. Feb 1 02:12:50 localhost sshd[19101]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:12:50 localhost systemd-logind[761]: New session 10 of user zuul. Feb 1 02:12:50 localhost systemd[1]: Started Session 10 of User zuul. 
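The repeated "ssh-rsa algorithm is disabled" lines are consistent with the RHEL 9 system-wide crypto policy, which disables SHA-1 based ssh-rsa signatures by default; each incoming connection attempt logs the notice. A diagnostic sketch (not part of the job) to inspect the effective policy and sshd algorithm lists:
    update-crypto-policies --show
    sshd -T | grep -iE 'pubkeyaccepted|hostkeyalgorithms'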
Feb 1 02:12:50 localhost python3[19118]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-37fb-68e8-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:12:53 localhost python3[19138]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-37fb-68e8-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:13:24 localhost python3[19157]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False Feb 1 02:13:26 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:13:27 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:13:57 localhost python3[19315]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False Feb 1 02:13:59 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:00 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:16 localhost python3[19456]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False Feb 1 02:14:18 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:18 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:23 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:23 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:47 localhost python3[19733]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False Feb 1 02:14:49 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:49 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:54 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:14:54 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:15:16 localhost python3[20069]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False Feb 1 02:15:19 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Feb 1 02:15:24 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:15:24 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:15:34 localhost python3[20465]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:16:05 localhost python3[20484]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:16:26 localhost kernel: SELinux: Converting 487 SID table entries... Feb 1 02:16:26 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:16:26 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:16:26 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:16:26 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:16:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:16:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:16:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:16:26 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=4 res=1 Feb 1 02:16:26 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC. Feb 1 02:16:29 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:16:29 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:16:29 localhost systemd[1]: Reloading. Feb 1 02:16:29 localhost systemd-rc-local-generator[21148]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:16:29 localhost systemd-sysv-generator[21153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:16:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:16:30 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:16:30 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:16:30 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:16:30 localhost systemd[1]: run-r6a24df3bb2174e4892da2442aa4c6682.service: Deactivated successfully. Feb 1 02:16:31 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
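The rhsm_repository and dnf steps above enable the EUS, Fast Datapath and OpenStack 17.1 repositories and install the networking packages. A CLI equivalent using the repo IDs taken from the log:
    subscription-manager repos \
        --enable=rhel-9-for-x86_64-baseos-eus-rpms \
        --enable=rhel-9-for-x86_64-appstream-eus-rpms \
        --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
        --enable=fast-datapath-for-rhel-9-x86_64-rpms \
        --enable=openstack-17.1-for-rhel-9-x86_64-rpms
    dnf repolist --enabled             # the job itself runs 'yum repolist --enabled'
    dnf -y install openvswitch os-net-config ansible-core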
Feb 1 02:16:31 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 02:16:43 localhost python3[21817]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:16:55 localhost python3[21838]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:16:57 localhost python3[21886]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:16:57 localhost python3[21929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769930217.093162-334-141113391887851/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:16:59 localhost python3[21959]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Feb 1 02:16:59 localhost systemd-journald[619]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation. 
Feb 1 02:16:59 localhost systemd-journald[619]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 02:16:59 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:16:59 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:16:59 localhost python3[21980]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Feb 1 02:17:00 localhost python3[22000]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Feb 1 02:17:00 localhost python3[22020]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True 
gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Feb 1 02:17:00 localhost python3[22040]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Feb 1 02:17:03 localhost python3[22060]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:17:03 localhost systemd[1]: Starting LSB: Bring up/down networking... Feb 1 02:17:03 localhost network[22063]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 02:17:03 localhost network[22074]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 02:17:03 localhost network[22063]: WARN : [network] 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:03 localhost network[22075]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:03 localhost network[22063]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. 
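The community.general.nmcli calls above remove leftover CI network profiles (state=absent). A rough CLI equivalent, with the connection names taken from the log:
    for conn in ci-private-network ci-private-network-20 ci-private-network-21 \
                ci-private-network-22 ci-private-network-23; do
        nmcli connection delete "$conn" || true   # tolerate already-missing profiles
    done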
Feb 1 02:17:03 localhost network[22076]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 02:17:03 localhost NetworkManager[5972]: [1769930223.9815] audit: op="connections-reload" pid=22104 uid=0 result="success" Feb 1 02:17:04 localhost network[22063]: Bringing up loopback interface: [ OK ] Feb 1 02:17:04 localhost NetworkManager[5972]: [1769930224.1976] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22192 uid=0 result="success" Feb 1 02:17:04 localhost network[22063]: Bringing up interface eth0: [ OK ] Feb 1 02:17:04 localhost systemd[1]: Started LSB: Bring up/down networking. Feb 1 02:17:04 localhost python3[22233]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:17:04 localhost systemd[1]: Starting Open vSwitch Database Unit... Feb 1 02:17:04 localhost chown[22237]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Feb 1 02:17:04 localhost ovs-ctl[22242]: /etc/openvswitch/conf.db does not exist ... (warning). Feb 1 02:17:04 localhost ovs-ctl[22242]: Creating empty database /etc/openvswitch/conf.db [ OK ] Feb 1 02:17:04 localhost ovs-ctl[22242]: Starting ovsdb-server [ OK ] Feb 1 02:17:04 localhost ovs-vsctl[22291]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Feb 1 02:17:05 localhost ovs-vsctl[22311]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"f18e6148-4a7e-452d-80cb-72c86b59e439\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Feb 1 02:17:05 localhost ovs-ctl[22242]: Configuring Open vSwitch system IDs [ OK ] Feb 1 02:17:05 localhost ovs-ctl[22242]: Enabling remote OVSDB managers [ OK ] Feb 1 02:17:05 localhost ovs-vsctl[22317]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005604215.novalocal Feb 1 02:17:05 localhost systemd[1]: Started Open vSwitch Database Unit. Feb 1 02:17:05 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Feb 1 02:17:05 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Feb 1 02:17:05 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Feb 1 02:17:05 localhost kernel: openvswitch: Open vSwitch switching datapath Feb 1 02:17:05 localhost ovs-ctl[22361]: Inserting openvswitch module [ OK ] Feb 1 02:17:05 localhost ovs-ctl[22330]: Starting ovs-vswitchd [ OK ] Feb 1 02:17:05 localhost ovs-vsctl[22380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005604215.novalocal Feb 1 02:17:05 localhost ovs-ctl[22330]: Enabling remote OVSDB managers [ OK ] Feb 1 02:17:05 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Feb 1 02:17:05 localhost systemd[1]: Starting Open vSwitch... Feb 1 02:17:05 localhost systemd[1]: Finished Open vSwitch. 
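The openvswitch service start above creates an empty /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, and loads the openvswitch kernel module. A quick sanity check (unit names as packaged on RHEL 9):
    systemctl --no-pager status openvswitch ovsdb-server ovs-vswitchd
    ovs-vsctl show                     # no bridges yet; br-ex is created below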
Feb 1 02:17:35 localhost python3[22399]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.7370] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22596 uid=0 result="success" Feb 1 02:17:36 localhost ifup[22597]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:36 localhost ifup[22598]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:36 localhost ifup[22599]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.7679] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22605 uid=0 result="success" Feb 1 02:17:36 localhost ovs-vsctl[22607]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:17:45:d1 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Feb 1 02:17:36 localhost kernel: device ovs-system entered promiscuous mode Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.7955] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Feb 1 02:17:36 localhost kernel: Timeout policy base is empty Feb 1 02:17:36 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Feb 1 02:17:36 localhost systemd-udevd[22608]: Network interface NamePolicy= disabled on kernel command line. Feb 1 02:17:36 localhost kernel: device br-ex entered promiscuous mode Feb 1 02:17:36 localhost systemd-udevd[22622]: Network interface NamePolicy= disabled on kernel command line. 
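os-net-config applies the layout from the file deployed earlier (sha1 91bc45728dd9738fc644e3ada9d8642294da29ff) and issues the ovs-vsctl calls seen in the log: add-br br-ex, then add-port for eth1 and the vlan20/21/22/23/44 internal ports. A sketch of applying and verifying it by hand:
    sha1sum /etc/os-net-config/tripleo_config.yaml
    os-net-config -c /etc/os-net-config/tripleo_config.yaml
    ovs-vsctl list-ports br-ex         # expect eth1 plus the vlan ports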
Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.8385] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.8663] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22633 uid=0 result="success" Feb 1 02:17:36 localhost NetworkManager[5972]: [1769930256.8869] device (br-ex): carrier: link connected Feb 1 02:17:39 localhost NetworkManager[5972]: [1769930259.9429] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22662 uid=0 result="success" Feb 1 02:17:39 localhost NetworkManager[5972]: [1769930259.9895] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22677 uid=0 result="success" Feb 1 02:17:40 localhost NET[22702]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.0770] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.0883] dhcp4 (eth1): canceled DHCP transaction Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.0883] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.0883] dhcp4 (eth1): state changed no lease Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.0916] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22711 uid=0 result="success" Feb 1 02:17:40 localhost ifup[22712]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:40 localhost ifup[22713]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:40 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 1 02:17:40 localhost ifup[22714]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:40 localhost systemd[1]: Started Network Manager Script Dispatcher Service. 
Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.1273] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22728 uid=0 result="success" Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.2110] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22739 uid=0 result="success" Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.2191] device (eth1): carrier: link connected Feb 1 02:17:40 localhost NetworkManager[5972]: [1769930260.2416] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22748 uid=0 result="success" Feb 1 02:17:40 localhost ipv6_wait_tentative[22760]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Feb 1 02:17:41 localhost ipv6_wait_tentative[22765]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.3067] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22774 uid=0 result="success" Feb 1 02:17:42 localhost ovs-vsctl[22789]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Feb 1 02:17:42 localhost kernel: device eth1 entered promiscuous mode Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.3766] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22797 uid=0 result="success" Feb 1 02:17:42 localhost ifup[22798]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:42 localhost ifup[22799]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:42 localhost ifup[22800]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.4075] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22806 uid=0 result="success" Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.4498] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22816 uid=0 result="success" Feb 1 02:17:42 localhost ifup[22817]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:42 localhost ifup[22818]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:42 localhost ifup[22819]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.4805] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22825 uid=0 result="success" Feb 1 02:17:42 localhost ovs-vsctl[22828]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Feb 1 02:17:42 localhost kernel: device vlan22 entered promiscuous mode Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.5199] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Feb 1 02:17:42 localhost systemd-udevd[22830]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.5431] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22839 uid=0 result="success" Feb 1 02:17:42 localhost NetworkManager[5972]: [1769930262.5639] device (vlan22): carrier: link connected Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.6325] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22868 uid=0 result="success" Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.6737] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22883 uid=0 result="success" Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.7306] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22904 uid=0 result="success" Feb 1 02:17:45 localhost ifup[22905]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:45 localhost ifup[22906]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:45 localhost ifup[22907]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.7616] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22913 uid=0 result="success" Feb 1 02:17:45 localhost ovs-vsctl[22916]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Feb 1 02:17:45 localhost systemd-udevd[22918]: Network interface NamePolicy= disabled on kernel command line. Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.8032] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Feb 1 02:17:45 localhost kernel: device vlan20 entered promiscuous mode Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.8297] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22928 uid=0 result="success" Feb 1 02:17:45 localhost NetworkManager[5972]: [1769930265.8496] device (vlan20): carrier: link connected Feb 1 02:17:48 localhost NetworkManager[5972]: [1769930268.9072] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22958 uid=0 result="success" Feb 1 02:17:48 localhost NetworkManager[5972]: [1769930268.9531] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22973 uid=0 result="success" Feb 1 02:17:49 localhost NetworkManager[5972]: [1769930269.0062] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22994 uid=0 result="success" Feb 1 02:17:49 localhost ifup[22995]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:49 localhost ifup[22996]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:49 localhost ifup[22997]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 1 02:17:49 localhost NetworkManager[5972]: [1769930269.0363] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23003 uid=0 result="success" Feb 1 02:17:49 localhost ovs-vsctl[23006]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Feb 1 02:17:49 localhost kernel: device vlan44 entered promiscuous mode Feb 1 02:17:49 localhost NetworkManager[5972]: [1769930269.1099] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Feb 1 02:17:49 localhost systemd-udevd[23009]: Network interface NamePolicy= disabled on kernel command line. Feb 1 02:17:49 localhost NetworkManager[5972]: [1769930269.1273] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23018 uid=0 result="success" Feb 1 02:17:49 localhost NetworkManager[5972]: [1769930269.1441] device (vlan44): carrier: link connected Feb 1 02:17:50 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.1978] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23049 uid=0 result="success" Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.2397] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23064 uid=0 result="success" Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.2964] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23085 uid=0 result="success" Feb 1 02:17:52 localhost ifup[23086]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:52 localhost ifup[23087]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:52 localhost ifup[23088]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.3256] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23094 uid=0 result="success" Feb 1 02:17:52 localhost ovs-vsctl[23097]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Feb 1 02:17:52 localhost kernel: device vlan23 entered promiscuous mode Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.3636] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Feb 1 02:17:52 localhost systemd-udevd[23099]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.3909] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23109 uid=0 result="success" Feb 1 02:17:52 localhost NetworkManager[5972]: [1769930272.4115] device (vlan23): carrier: link connected Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.4716] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23139 uid=0 result="success" Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.5206] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23154 uid=0 result="success" Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.5838] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23175 uid=0 result="success" Feb 1 02:17:55 localhost ifup[23176]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:55 localhost ifup[23177]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:55 localhost ifup[23178]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.6153] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23184 uid=0 result="success" Feb 1 02:17:55 localhost ovs-vsctl[23187]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Feb 1 02:17:55 localhost systemd-udevd[23189]: Network interface NamePolicy= disabled on kernel command line. Feb 1 02:17:55 localhost kernel: device vlan21 entered promiscuous mode Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.6576] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.6840] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23199 uid=0 result="success" Feb 1 02:17:55 localhost NetworkManager[5972]: [1769930275.7063] device (vlan21): carrier: link connected Feb 1 02:17:58 localhost NetworkManager[5972]: [1769930278.7584] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23229 uid=0 result="success" Feb 1 02:17:58 localhost NetworkManager[5972]: [1769930278.8053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23244 uid=0 result="success" Feb 1 02:17:58 localhost NetworkManager[5972]: [1769930278.8658] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23265 uid=0 result="success" Feb 1 02:17:58 localhost ifup[23266]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:17:58 localhost ifup[23267]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:17:58 localhost ifup[23268]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 1 02:17:58 localhost NetworkManager[5972]: [1769930278.8986] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23274 uid=0 result="success" Feb 1 02:17:58 localhost ovs-vsctl[23277]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Feb 1 02:17:58 localhost NetworkManager[5972]: [1769930278.9620] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23284 uid=0 result="success" Feb 1 02:18:00 localhost NetworkManager[5972]: [1769930280.0213] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23311 uid=0 result="success" Feb 1 02:18:00 localhost NetworkManager[5972]: [1769930280.0686] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23326 uid=0 result="success" Feb 1 02:18:00 localhost NetworkManager[5972]: [1769930280.1280] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23347 uid=0 result="success" Feb 1 02:18:00 localhost ifup[23348]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:18:00 localhost ifup[23349]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:18:00 localhost ifup[23350]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:18:00 localhost NetworkManager[5972]: [1769930280.1597] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23356 uid=0 result="success" Feb 1 02:18:00 localhost ovs-vsctl[23359]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Feb 1 02:18:00 localhost NetworkManager[5972]: [1769930280.2148] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23366 uid=0 result="success" Feb 1 02:18:01 localhost NetworkManager[5972]: [1769930281.2793] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23394 uid=0 result="success" Feb 1 02:18:01 localhost NetworkManager[5972]: [1769930281.3262] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23409 uid=0 result="success" Feb 1 02:18:01 localhost NetworkManager[5972]: [1769930281.3843] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23430 uid=0 result="success" Feb 1 02:18:01 localhost ifup[23431]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:18:01 localhost ifup[23432]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:18:01 localhost ifup[23433]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 1 02:18:01 localhost NetworkManager[5972]: [1769930281.4156] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23439 uid=0 result="success" Feb 1 02:18:01 localhost ovs-vsctl[23442]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Feb 1 02:18:01 localhost NetworkManager[5972]: [1769930281.4751] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23449 uid=0 result="success" Feb 1 02:18:02 localhost NetworkManager[5972]: [1769930282.5333] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23477 uid=0 result="success" Feb 1 02:18:02 localhost NetworkManager[5972]: [1769930282.5773] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23492 uid=0 result="success" Feb 1 02:18:02 localhost NetworkManager[5972]: [1769930282.6291] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23513 uid=0 result="success" Feb 1 02:18:02 localhost ifup[23514]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:18:02 localhost ifup[23515]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:18:02 localhost ifup[23516]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Feb 1 02:18:02 localhost NetworkManager[5972]: [1769930282.6587] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23522 uid=0 result="success" Feb 1 02:18:02 localhost ovs-vsctl[23525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Feb 1 02:18:02 localhost NetworkManager[5972]: [1769930282.7420] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23532 uid=0 result="success" Feb 1 02:18:03 localhost NetworkManager[5972]: [1769930283.7972] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23560 uid=0 result="success" Feb 1 02:18:03 localhost NetworkManager[5972]: [1769930283.8419] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23575 uid=0 result="success" Feb 1 02:18:03 localhost NetworkManager[5972]: [1769930283.9013] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23596 uid=0 result="success" Feb 1 02:18:03 localhost ifup[23597]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 1 02:18:03 localhost ifup[23598]: 'network-scripts' will be removed from distribution in near future. Feb 1 02:18:03 localhost ifup[23599]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 1 02:18:03 localhost NetworkManager[5972]: [1769930283.9294] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23605 uid=0 result="success" Feb 1 02:18:03 localhost ovs-vsctl[23608]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Feb 1 02:18:04 localhost NetworkManager[5972]: [1769930284.0133] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23615 uid=0 result="success" Feb 1 02:18:05 localhost NetworkManager[5972]: [1769930285.0725] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23643 uid=0 result="success" Feb 1 02:18:05 localhost NetworkManager[5972]: [1769930285.1184] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23658 uid=0 result="success" Feb 1 02:18:30 localhost python3[23690]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:18:34 localhost python3[23709]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 02:18:35 localhost python3[23725]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 02:18:36 localhost python3[23739]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 02:18:37 localhost python3[23755]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 1 02:18:37 localhost python3[23769]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Feb 1 02:18:38 localhost python3[23784]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005604215.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:18:39 localhost python3[23804]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-37fb-68e8-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:18:39 localhost systemd[1]: Starting Hostname Service... Feb 1 02:18:39 localhost systemd[1]: Started Hostname Service. Feb 1 02:18:39 localhost systemd-hostnamed[23808]: Hostname set to (static) Feb 1 02:18:39 localhost NetworkManager[5972]: [1769930319.5568] hostname: static hostname changed from "np0005604215.novalocal" to "np0005604215.localdomain" Feb 1 02:18:39 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 1 02:18:39 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 1 02:18:40 localhost systemd[1]: session-10.scope: Deactivated successfully. Feb 1 02:18:40 localhost systemd[1]: session-10.scope: Consumed 1min 43.599s CPU time. Feb 1 02:18:40 localhost systemd-logind[761]: Session 10 logged out. Waiting for processes to exit. Feb 1 02:18:40 localhost systemd-logind[761]: Removed session 10. 
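The two ansible command snippets above (with #012 being rsyslog's escaped newline) reconstruct to the following shell, which derives the short hostname and then sets the static hostname to its .localdomain form:
    hostname="np0005604215.novalocal"
    hostname_str_array=(${hostname//./ })            # split on dots
    echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname

    hostname=$(cat /home/zuul/ansible_hostname)
    hostnamectl hostname "$hostname.localdomain"     # -> np0005604215.localdomain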
Feb 1 02:18:43 localhost sshd[23819]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:18:43 localhost systemd-logind[761]: New session 11 of user zuul. Feb 1 02:18:43 localhost systemd[1]: Started Session 11 of User zuul. Feb 1 02:18:44 localhost python3[23836]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 1 02:18:45 localhost systemd[1]: session-11.scope: Deactivated successfully. Feb 1 02:18:45 localhost systemd-logind[761]: Session 11 logged out. Waiting for processes to exit. Feb 1 02:18:45 localhost systemd-logind[761]: Removed session 11. Feb 1 02:18:49 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 1 02:19:09 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 1 02:19:37 localhost sshd[23840]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:19:38 localhost systemd-logind[761]: New session 12 of user zuul. Feb 1 02:19:38 localhost systemd[1]: Started Session 12 of User zuul. Feb 1 02:19:38 localhost python3[23859]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:19:42 localhost systemd[1]: Reloading. Feb 1 02:19:42 localhost systemd-rc-local-generator[23901]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:19:42 localhost systemd-sysv-generator[23906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:19:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:19:42 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs. Feb 1 02:19:42 localhost systemd[1]: Reloading. Feb 1 02:19:42 localhost systemd-rc-local-generator[23941]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:19:42 localhost systemd-sysv-generator[23946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:19:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:19:42 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Feb 1 02:19:42 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Feb 1 02:19:42 localhost systemd[1]: Reloading. Feb 1 02:19:42 localhost systemd-rc-local-generator[23984]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 02:19:42 localhost systemd-sysv-generator[23987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:19:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:19:43 localhost systemd[1]: Listening on LVM2 poll daemon socket. Feb 1 02:19:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:19:43 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:19:43 localhost systemd[1]: Reloading. Feb 1 02:19:43 localhost systemd-rc-local-generator[24041]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:19:43 localhost systemd-sysv-generator[24046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:19:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:19:43 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:19:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:19:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:19:44 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:19:44 localhost systemd[1]: run-ra9f44e288ab74c8fad21591d94c1d7d8.service: Deactivated successfully. Feb 1 02:19:44 localhost systemd[1]: run-r80b7093d53e24c90b03922db9ba7f157.service: Deactivated successfully. Feb 1 02:20:44 localhost systemd[1]: session-12.scope: Deactivated successfully. Feb 1 02:20:44 localhost systemd[1]: session-12.scope: Consumed 4.721s CPU time. Feb 1 02:20:44 localhost systemd-logind[761]: Session 12 logged out. Waiting for processes to exit. Feb 1 02:20:44 localhost systemd-logind[761]: Removed session 12. Feb 1 02:21:36 localhost sshd[24631]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:36:01 localhost sshd[24637]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:36:01 localhost systemd-logind[761]: New session 13 of user zuul. Feb 1 02:36:01 localhost systemd[1]: Started Session 13 of User zuul. 
Feb 1 02:36:01 localhost python3[24685]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 02:36:03 localhost python3[24772]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:36:06 localhost python3[24789]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:36:07 localhost python3[24805]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:07 localhost kernel: loop: module loaded Feb 1 02:36:07 localhost kernel: loop3: detected capacity change from 0 to 14680064 Feb 1 02:36:07 localhost python3[24830]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:07 localhost lvm[24833]: PV /dev/loop3 not used. Feb 1 02:36:08 localhost lvm[24835]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 1 02:36:08 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Feb 1 02:36:08 localhost lvm[24844]: 1 logical volume(s) in volume group "ceph_vg0" now active Feb 1 02:36:08 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully. Feb 1 02:36:08 localhost python3[24892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:36:09 localhost python3[24935]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769931368.4223719-54355-269898621379971/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:10 localhost python3[24965]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:36:10 localhost systemd[1]: Reloading. Feb 1 02:36:10 localhost systemd-rc-local-generator[24990]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 02:36:10 localhost systemd-sysv-generator[24993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:36:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:36:10 localhost systemd[1]: Starting Ceph OSD losetup... Feb 1 02:36:10 localhost bash[25007]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img) Feb 1 02:36:10 localhost systemd[1]: Finished Ceph OSD losetup. Feb 1 02:36:10 localhost lvm[25008]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 1 02:36:10 localhost lvm[25008]: VG ceph_vg0 finished Feb 1 02:36:10 localhost python3[25024]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:36:13 localhost python3[25041]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:36:14 localhost python3[25057]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:14 localhost kernel: loop4: detected capacity change from 0 to 14680064 Feb 1 02:36:14 localhost python3[25079]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:15 localhost lvm[25082]: PV /dev/loop4 not used. Feb 1 02:36:15 localhost lvm[25092]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 1 02:36:15 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. Feb 1 02:36:15 localhost lvm[25094]: 1 logical volume(s) in volume group "ceph_vg1" now active Feb 1 02:36:15 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. 
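The OSD backing storage created above is file-backed. Each #012-encoded task builds a sparse 7 GiB image, attaches it to a loop device, and puts an LVM stack on top; decoded for loop3/ceph_vg0 (the loop4/ceph_vg1 task is identical apart from the names):

    # Sparse 7 GiB backing file: count=0 seek=7G sets the size without writing data,
    # matching the kernel's "loop3: detected capacity change from 0 to 14680064" (512-byte sectors)
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

    # One PV/VG/LV per future OSD, using all available extents
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs

The ceph-osd-losetup-0/1 units installed alongside (their content is not logged) presumably just re-run losetup at boot so the loop devices reappear before LVM autoactivation; the "Starting Ceph OSD losetup..." lines followed by losetup output for /dev/loop3 and /dev/loop4 match that behaviour.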
Feb 1 02:36:15 localhost python3[25142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:36:16 localhost python3[25185]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769931375.4517767-54529-197489223700253/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:16 localhost python3[25215]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:36:17 localhost systemd[1]: Reloading. Feb 1 02:36:17 localhost systemd-sysv-generator[25247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:36:17 localhost systemd-rc-local-generator[25243]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:36:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:36:18 localhost systemd[1]: Starting Ceph OSD losetup... Feb 1 02:36:18 localhost bash[25256]: /dev/loop4: [64516]:9171997 (/var/lib/ceph-osd-1.img) Feb 1 02:36:18 localhost systemd[1]: Finished Ceph OSD losetup. Feb 1 02:36:18 localhost lvm[25257]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 1 02:36:18 localhost lvm[25257]: VG ceph_vg1 finished Feb 1 02:36:26 localhost python3[25302]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Feb 1 02:36:27 localhost python3[25322]: ansible-hostname Invoked with name=np0005604215.localdomain use=None Feb 1 02:36:27 localhost systemd[1]: Starting Hostname Service... Feb 1 02:36:27 localhost systemd[1]: Started Hostname Service. Feb 1 02:36:29 localhost python3[25345]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Feb 1 02:36:30 localhost python3[25393]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.6b1777aptmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:30 localhost python3[25423]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.6b1777aptmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:31 localhost python3[25439]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.6b1777aptmphosts insertbefore=BOF block=192.168.122.106 np0005604212.localdomain np0005604212#012192.168.122.106 np0005604212.ctlplane.localdomain np0005604212.ctlplane#012192.168.122.107 np0005604213.localdomain np0005604213#012192.168.122.107 np0005604213.ctlplane.localdomain np0005604213.ctlplane#012192.168.122.108 np0005604215.localdomain np0005604215#012192.168.122.108 np0005604215.ctlplane.localdomain np0005604215.ctlplane#012192.168.122.103 np0005604209.localdomain np0005604209#012192.168.122.103 np0005604209.ctlplane.localdomain np0005604209.ctlplane#012192.168.122.104 np0005604210.localdomain np0005604210#012192.168.122.104 np0005604210.ctlplane.localdomain np0005604210.ctlplane#012192.168.122.105 np0005604211.localdomain np0005604211#012192.168.122.105 np0005604211.ctlplane.localdomain np0005604211.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:32 localhost python3[25455]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.6b1777aptmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:32 localhost python3[25472]: ansible-file Invoked with path=/tmp/ansible.6b1777aptmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:34 localhost python3[25488]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:35 localhost python3[25506]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:36:39 localhost python3[25555]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:36:40 localhost python3[25600]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769931399.478967-55383-266207153660309/source dest=/etc/chrony.conf owner=root group=root 
mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:41 localhost python3[25630]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:36:43 localhost python3[25648]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:36:43 localhost chronyd[767]: chronyd exiting Feb 1 02:36:43 localhost systemd[1]: Stopping NTP client/server... Feb 1 02:36:43 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 1 02:36:43 localhost systemd[1]: Stopped NTP client/server. Feb 1 02:36:43 localhost systemd[1]: chronyd.service: Consumed 80ms CPU time, read 1.9M from disk, written 0B to disk. Feb 1 02:36:43 localhost systemd[1]: Starting NTP client/server... Feb 1 02:36:43 localhost chronyd[25656]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 1 02:36:43 localhost chronyd[25656]: Frequency -30.640 +/- 0.062 ppm read from /var/lib/chrony/drift Feb 1 02:36:43 localhost chronyd[25656]: Loaded seccomp filter (level 2) Feb 1 02:36:43 localhost systemd[1]: Started NTP client/server. Feb 1 02:36:45 localhost python3[25705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:36:45 localhost python3[25748]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769931404.7074661-55614-206577621577627/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:36:46 localhost python3[25778]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:36:46 localhost systemd[1]: Reloading. Feb 1 02:36:46 localhost systemd-rc-local-generator[25801]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:36:46 localhost systemd-sysv-generator[25806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:36:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:36:46 localhost systemd[1]: Reloading. Feb 1 02:36:46 localhost systemd-sysv-generator[25844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:36:46 localhost systemd-rc-local-generator[25841]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:36:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:36:46 localhost systemd[1]: Starting chronyd online sources service... Feb 1 02:36:46 localhost chronyc[25853]: 200 OK Feb 1 02:36:46 localhost systemd[1]: chrony-online.service: Deactivated successfully. Feb 1 02:36:46 localhost systemd[1]: Finished chronyd online sources service. Feb 1 02:36:47 localhost python3[25869]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:47 localhost chronyd[25656]: System clock was stepped by 0.000000 seconds Feb 1 02:36:47 localhost chronyd[25656]: Selected source 216.197.156.83 (pool.ntp.org) Feb 1 02:36:47 localhost python3[25886]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:36:57 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 1 02:36:58 localhost python3[25906]: ansible-timezone Invoked with name=UTC hwclock=None Feb 1 02:36:58 localhost systemd[1]: Starting Time & Date Service... Feb 1 02:36:58 localhost systemd[1]: Started Time & Date Service. Feb 1 02:36:59 localhost python3[25926]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:36:59 localhost systemd[1]: Stopping NTP client/server... Feb 1 02:36:59 localhost chronyd[25656]: chronyd exiting Feb 1 02:36:59 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 1 02:36:59 localhost systemd[1]: Stopped NTP client/server. Feb 1 02:36:59 localhost systemd[1]: Starting NTP client/server... Feb 1 02:36:59 localhost chronyd[25933]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 1 02:36:59 localhost chronyd[25933]: Frequency -30.640 +/- 0.062 ppm read from /var/lib/chrony/drift Feb 1 02:36:59 localhost chronyd[25933]: Loaded seccomp filter (level 2) Feb 1 02:36:59 localhost systemd[1]: Started NTP client/server. Feb 1 02:37:03 localhost chronyd[25933]: Selected source 216.197.156.83 (pool.ntp.org) Feb 1 02:37:28 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Feb 1 02:39:07 localhost sshd[26130]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:07 localhost systemd-logind[761]: New session 14 of user ceph-admin. Feb 1 02:39:07 localhost systemd[1]: Created slice User Slice of UID 1002. Feb 1 02:39:07 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Feb 1 02:39:07 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Feb 1 02:39:07 localhost systemd[1]: Starting User Manager for UID 1002... Feb 1 02:39:07 localhost sshd[26148]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:07 localhost systemd[26134]: Queued start job for default target Main User Target. 
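With the #012 escapes decoded, the blockinfile tasks above first strip any existing HEAT_HOSTS section from a temporary copy of /etc/hosts and then insert this managed block at the top of the file before the copy is put back over /etc/hosts:

    # START_HOST_ENTRIES_FOR_STACK: overcloud
    192.168.122.106 np0005604212.localdomain np0005604212
    192.168.122.106 np0005604212.ctlplane.localdomain np0005604212.ctlplane
    192.168.122.107 np0005604213.localdomain np0005604213
    192.168.122.107 np0005604213.ctlplane.localdomain np0005604213.ctlplane
    192.168.122.108 np0005604215.localdomain np0005604215
    192.168.122.108 np0005604215.ctlplane.localdomain np0005604215.ctlplane
    192.168.122.103 np0005604209.localdomain np0005604209
    192.168.122.103 np0005604209.ctlplane.localdomain np0005604209.ctlplane
    192.168.122.104 np0005604210.localdomain np0005604210
    192.168.122.104 np0005604210.ctlplane.localdomain np0005604210.ctlplane
    192.168.122.105 np0005604211.localdomain np0005604211
    192.168.122.105 np0005604211.ctlplane.localdomain np0005604211.ctlplane

    192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
    # END_HOST_ENTRIES_FOR_STACK: overcloud

Time synchronisation follows: /etc/chrony.conf is deployed, chronyd is restarted, and chrony-online.service runs chronyc (the logged "200 OK" suggests a chronyc online call) before the playbook verifies sync with two further commands:

    chronyc makestep       # step the clock immediately if it is off
    chronyc waitsync 30    # wait for chronyd to report sync, checking up to 30 times

chronyd answers "System clock was stepped by 0.000000 seconds", i.e. the clock was already correct, and selects 216.197.156.83 (pool.ntp.org) as its source.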
Feb 1 02:39:07 localhost systemd[26134]: Created slice User Application Slice. Feb 1 02:39:07 localhost systemd[26134]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 02:39:07 localhost systemd[26134]: Started Daily Cleanup of User's Temporary Directories. Feb 1 02:39:07 localhost systemd[26134]: Reached target Paths. Feb 1 02:39:07 localhost systemd[26134]: Reached target Timers. Feb 1 02:39:07 localhost systemd[26134]: Starting D-Bus User Message Bus Socket... Feb 1 02:39:07 localhost systemd[26134]: Starting Create User's Volatile Files and Directories... Feb 1 02:39:07 localhost systemd[26134]: Finished Create User's Volatile Files and Directories. Feb 1 02:39:07 localhost systemd[26134]: Listening on D-Bus User Message Bus Socket. Feb 1 02:39:07 localhost systemd[26134]: Reached target Sockets. Feb 1 02:39:07 localhost systemd[26134]: Reached target Basic System. Feb 1 02:39:07 localhost systemd[26134]: Reached target Main User Target. Feb 1 02:39:07 localhost systemd[26134]: Startup finished in 113ms. Feb 1 02:39:07 localhost systemd[1]: Started User Manager for UID 1002. Feb 1 02:39:07 localhost systemd[1]: Started Session 14 of User ceph-admin. Feb 1 02:39:07 localhost systemd-logind[761]: New session 16 of user ceph-admin. Feb 1 02:39:07 localhost systemd[1]: Started Session 16 of User ceph-admin. Feb 1 02:39:07 localhost sshd[26170]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:07 localhost systemd-logind[761]: New session 17 of user ceph-admin. Feb 1 02:39:07 localhost systemd[1]: Started Session 17 of User ceph-admin. Feb 1 02:39:08 localhost sshd[26189]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:08 localhost systemd-logind[761]: New session 18 of user ceph-admin. Feb 1 02:39:08 localhost systemd[1]: Started Session 18 of User ceph-admin. Feb 1 02:39:08 localhost sshd[26208]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:08 localhost systemd-logind[761]: New session 19 of user ceph-admin. Feb 1 02:39:08 localhost systemd[1]: Started Session 19 of User ceph-admin. Feb 1 02:39:08 localhost sshd[26227]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:08 localhost systemd-logind[761]: New session 20 of user ceph-admin. Feb 1 02:39:08 localhost systemd[1]: Started Session 20 of User ceph-admin. Feb 1 02:39:09 localhost sshd[26246]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:09 localhost systemd-logind[761]: New session 21 of user ceph-admin. Feb 1 02:39:09 localhost systemd[1]: Started Session 21 of User ceph-admin. Feb 1 02:39:09 localhost sshd[26265]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:09 localhost systemd-logind[761]: New session 22 of user ceph-admin. Feb 1 02:39:09 localhost systemd[1]: Started Session 22 of User ceph-admin. Feb 1 02:39:10 localhost sshd[26284]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:10 localhost systemd-logind[761]: New session 23 of user ceph-admin. Feb 1 02:39:10 localhost systemd[1]: Started Session 23 of User ceph-admin. Feb 1 02:39:10 localhost sshd[26303]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:10 localhost systemd-logind[761]: New session 24 of user ceph-admin. Feb 1 02:39:10 localhost systemd[1]: Started Session 24 of User ceph-admin. Feb 1 02:39:11 localhost sshd[26320]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:11 localhost systemd-logind[761]: New session 25 of user ceph-admin. Feb 1 02:39:11 localhost systemd[1]: Started Session 25 of User ceph-admin. 
Feb 1 02:39:11 localhost sshd[26339]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:39:11 localhost systemd-logind[761]: New session 26 of user ceph-admin. Feb 1 02:39:11 localhost systemd[1]: Started Session 26 of User ceph-admin. Feb 1 02:39:11 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:26 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:26 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:27 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:27 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:27 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26553 (sysctl) Feb 1 02:39:27 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Feb 1 02:39:27 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Feb 1 02:39:28 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:28 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:28 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:31 localhost kernel: VFS: idmapped mount is not enabled. Feb 1 02:39:49 localhost podman[26692]: 2026-02-01 07:39:28.801276696 +0000 UTC m=+0.040513841 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:39:49 localhost podman[26692]: Feb 1 02:39:50 localhost podman[26692]: 2026-02-01 07:39:50.432319543 +0000 UTC m=+21.671556608 container create 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Feb 1 02:39:50 localhost systemd[1]: Created slice Slice /machine. Feb 1 02:39:50 localhost systemd[1]: Started libpod-conmon-26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723.scope. Feb 1 02:39:50 localhost systemd[1]: Started libcrun container. 
Feb 1 02:39:50 localhost podman[26692]: 2026-02-01 07:39:50.548509657 +0000 UTC m=+21.787746722 container init 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, release=1764794109, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:39:50 localhost systemd[1]: tmp-crun.4WsTGQ.mount: Deactivated successfully. Feb 1 02:39:50 localhost podman[26692]: 2026-02-01 07:39:50.562202303 +0000 UTC m=+21.801439368 container start 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, vcs-type=git, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 02:39:50 localhost podman[26692]: 2026-02-01 07:39:50.562564264 +0000 UTC m=+21.801801369 container attach 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True) Feb 1 02:39:50 localhost systemd[1]: libpod-26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723.scope: Deactivated successfully. Feb 1 02:39:50 localhost admiring_austin[26788]: 167 167 Feb 1 02:39:50 localhost podman[26692]: 2026-02-01 07:39:50.567275151 +0000 UTC m=+21.806512216 container died 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1764794109, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Feb 1 02:39:50 localhost podman[26793]: 2026-02-01 07:39:50.650408556 +0000 UTC m=+0.074104505 container remove 26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_austin, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, description=Red Hat 
Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Feb 1 02:39:50 localhost systemd[1]: libpod-conmon-26cd8696c19d2fe4af77843d622f2c78f40c90e03746036000841490c3881723.scope: Deactivated successfully. Feb 1 02:39:50 localhost podman[26815]: Feb 1 02:39:50 localhost podman[26815]: 2026-02-01 07:39:50.87171003 +0000 UTC m=+0.071327809 container create 9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, RELEASE=main, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4) Feb 1 02:39:50 localhost systemd[1]: Started libpod-conmon-9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e.scope. Feb 1 02:39:50 localhost systemd[1]: Started libcrun container. 
Feb 1 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fec8d8a73a794d0b0aed5d06b9a5c8b7e21c597c0eef0f1ee2709d3c9c171f6/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:39:50 localhost podman[26815]: 2026-02-01 07:39:50.845290549 +0000 UTC m=+0.044908328 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fec8d8a73a794d0b0aed5d06b9a5c8b7e21c597c0eef0f1ee2709d3c9c171f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:39:50 localhost podman[26815]: 2026-02-01 07:39:50.96553987 +0000 UTC m=+0.165157669 container init 9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, release=1764794109, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:39:50 localhost podman[26815]: 2026-02-01 07:39:50.976825051 +0000 UTC m=+0.176442840 container start 9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-type=git, release=1764794109, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, name=rhceph) Feb 1 02:39:50 localhost podman[26815]: 2026-02-01 07:39:50.977104949 +0000 UTC m=+0.176722778 container attach 
9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, release=1764794109) Feb 1 02:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-7503a611394e34ea27df147f3929c32dcbe9bb686ca920a835ff327e8ebae175-merged.mount: Deactivated successfully. Feb 1 02:39:51 localhost flamboyant_noether[26831]: [ Feb 1 02:39:51 localhost flamboyant_noether[26831]: { Feb 1 02:39:51 localhost flamboyant_noether[26831]: "available": false, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "ceph_device": false, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "lsm_data": {}, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "lvs": [], Feb 1 02:39:51 localhost flamboyant_noether[26831]: "path": "/dev/sr0", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "rejected_reasons": [ Feb 1 02:39:51 localhost flamboyant_noether[26831]: "Insufficient space (<5GB)", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "Has a FileSystem" Feb 1 02:39:51 localhost flamboyant_noether[26831]: ], Feb 1 02:39:51 localhost flamboyant_noether[26831]: "sys_api": { Feb 1 02:39:51 localhost flamboyant_noether[26831]: "actuators": null, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "device_nodes": "sr0", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "human_readable_size": "482.00 KB", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "id_bus": "ata", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "model": "QEMU DVD-ROM", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "nr_requests": "2", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "partitions": {}, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "path": "/dev/sr0", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "removable": "1", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "rev": "2.5+", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "ro": "0", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "rotational": "1", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "sas_address": "", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "sas_device_handle": "", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "scheduler_mode": "mq-deadline", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "sectors": 0, Feb 1 02:39:51 localhost 
flamboyant_noether[26831]: "sectorsize": "2048", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "size": 493568.0, Feb 1 02:39:51 localhost flamboyant_noether[26831]: "support_discard": "0", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "type": "disk", Feb 1 02:39:51 localhost flamboyant_noether[26831]: "vendor": "QEMU" Feb 1 02:39:51 localhost flamboyant_noether[26831]: } Feb 1 02:39:51 localhost flamboyant_noether[26831]: } Feb 1 02:39:51 localhost flamboyant_noether[26831]: ] Feb 1 02:39:51 localhost systemd[1]: libpod-9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e.scope: Deactivated successfully. Feb 1 02:39:51 localhost podman[26815]: 2026-02-01 07:39:51.758557548 +0000 UTC m=+0.958175337 container died 9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, architecture=x86_64, name=rhceph, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 02:39:51 localhost systemd[1]: tmp-crun.q1Wjcr.mount: Deactivated successfully. Feb 1 02:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-4fec8d8a73a794d0b0aed5d06b9a5c8b7e21c597c0eef0f1ee2709d3c9c171f6-merged.mount: Deactivated successfully. 
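The JSON printed by the flamboyant_noether container has the shape of a ceph-volume inventory report: one object per block device with an "available" flag and a list of "rejected_reasons" (here /dev/sr0 is rejected for "Insufficient space (<5GB)" and "Has a FileSystem"). With the jq installed earlier, a report like this can be summarised as follows; a hypothetical invocation assuming the output has been saved to inventory.json:

    # Devices the inventory marks as usable for OSDs
    jq -r '.[] | select(.available) | .path' inventory.json

    # Rejected devices together with the reasons
    jq -r '.[] | select(.available | not)
               | "\(.path): \(.rejected_reasons | join(", "))"' inventory.json

For the output above, the second filter would print "/dev/sr0: Insufficient space (<5GB), Has a FileSystem" and the first would print nothing.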
Feb 1 02:39:51 localhost podman[27960]: 2026-02-01 07:39:51.85861105 +0000 UTC m=+0.085285514 container remove 9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_noether, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, name=rhceph, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, release=1764794109, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:39:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:39:51 localhost systemd[1]: libpod-conmon-9c8d3ecf4b7691ff3f6e17386e9b700fc03ce3ac68bc634818365f1c2bcf4c0e.scope: Deactivated successfully. Feb 1 02:39:52 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully. Feb 1 02:39:52 localhost systemd[1]: Closed Process Core Dump Socket. Feb 1 02:39:52 localhost systemd[1]: Stopping Process Core Dump Socket... Feb 1 02:39:52 localhost systemd[1]: Listening on Process Core Dump Socket. Feb 1 02:39:52 localhost systemd[1]: Reloading. Feb 1 02:39:52 localhost systemd-sysv-generator[28047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:39:52 localhost systemd-rc-local-generator[28041]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:39:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:39:52 localhost systemd[1]: Reloading. Feb 1 02:39:52 localhost systemd-rc-local-generator[28082]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:39:52 localhost systemd-sysv-generator[28085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:39:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:40:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Feb 1 02:40:22 localhost podman[28298]: Feb 1 02:40:23 localhost podman[28298]: 2026-02-01 07:40:22.933368897 +0000 UTC m=+0.043589909 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:24 localhost podman[28298]: 2026-02-01 07:40:24.10950001 +0000 UTC m=+1.219721042 container create 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True) Feb 1 02:40:24 localhost systemd[1]: Started libpod-conmon-89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4.scope. Feb 1 02:40:24 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:24 localhost podman[28298]: 2026-02-01 07:40:24.18695156 +0000 UTC m=+1.297172542 container init 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, build-date=2025-12-08T17:28:53Z, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=rhceph-container) Feb 1 02:40:24 localhost podman[28298]: 2026-02-01 07:40:24.197260389 +0000 UTC m=+1.307481361 container start 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., release=1764794109) Feb 1 02:40:24 localhost podman[28298]: 2026-02-01 07:40:24.197463544 +0000 UTC m=+1.307684526 container attach 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, 
vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, release=1764794109, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:40:24 localhost awesome_taussig[28388]: 167 167 Feb 1 02:40:24 localhost systemd[1]: libpod-89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4.scope: Deactivated successfully. Feb 1 02:40:24 localhost podman[28298]: 2026-02-01 07:40:24.20199713 +0000 UTC m=+1.312218172 container died 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., release=1764794109, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:24 localhost podman[28393]: 2026-02-01 07:40:24.323390266 +0000 UTC m=+0.109056354 container remove 89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_taussig, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=1764794109, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 02:40:24 localhost systemd[1]: libpod-conmon-89517299267c8f4f95cca75ab569851b82ed78796056d4787362eb51168d6ef4.scope: Deactivated successfully. Feb 1 02:40:24 localhost systemd[1]: Reloading. Feb 1 02:40:24 localhost systemd-rc-local-generator[28428]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:24 localhost systemd-sysv-generator[28435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:24 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:40:24 localhost systemd[1]: var-lib-containers-storage-overlay-2ade20519be1e549b1bf6510f51794d7b717c89e8519849b0c372d7baa735f12-merged.mount: Deactivated successfully. Feb 1 02:40:24 localhost systemd[1]: Reloading. Feb 1 02:40:24 localhost systemd-sysv-generator[28474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:24 localhost systemd-rc-local-generator[28469]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:24 localhost systemd[1]: Reached target All Ceph clusters and services. Feb 1 02:40:24 localhost systemd[1]: Reloading. Feb 1 02:40:24 localhost systemd-rc-local-generator[28507]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:24 localhost systemd-sysv-generator[28513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:25 localhost systemd[1]: Reached target Ceph cluster 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 02:40:25 localhost systemd[1]: Reloading. Feb 1 02:40:25 localhost systemd-rc-local-generator[28545]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:25 localhost systemd-sysv-generator[28552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:25 localhost systemd[1]: Reloading. Feb 1 02:40:25 localhost systemd-sysv-generator[28592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:25 localhost systemd-rc-local-generator[28588]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:25 localhost systemd[1]: Created slice Slice /system/ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 02:40:25 localhost systemd[1]: Reached target System Time Set. Feb 1 02:40:25 localhost systemd[1]: Reached target System Time Synchronized. Feb 1 02:40:25 localhost systemd[1]: Starting Ceph crash.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e... Feb 1 02:40:25 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:40:25 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 1 02:40:25 localhost podman[28651]: Feb 1 02:40:25 localhost podman[28651]: 2026-02-01 07:40:25.924768183 +0000 UTC m=+0.074047167 container create 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, com.redhat.component=rhceph-container) Feb 1 02:40:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9833aab783ba439bc7c83ce1c86d58b1573d474a692c843b2890ed6efebe973d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9833aab783ba439bc7c83ce1c86d58b1573d474a692c843b2890ed6efebe973d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:25 localhost podman[28651]: 2026-02-01 07:40:25.893168971 +0000 UTC m=+0.042447945 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9833aab783ba439bc7c83ce1c86d58b1573d474a692c843b2890ed6efebe973d/merged/etc/ceph/ceph.client.crash.np0005604215.keyring supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:26 localhost podman[28651]: 2026-02-01 07:40:26.021597525 +0000 UTC m=+0.170876500 container init 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:40:26 localhost podman[28651]: 2026-02-01 07:40:26.03119594 +0000 UTC m=+0.180474924 container start 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, release=1764794109, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:26 localhost bash[28651]: 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a Feb 1 02:40:26 localhost systemd[1]: Started Ceph crash.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. 
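The "Reached target All Ceph clusters and services", "Reached target Ceph cluster 33fac0b9-...", "Created slice Slice /system/ceph-33fac0b9-..." and "Starting Ceph crash.np0005604215 ..." lines above are the per-cluster systemd units generated on this host, all keyed by the cluster FSID. A small, purely illustrative sketch for listing them on the node, assuming systemctl is on PATH:

    import subprocess

    # Cluster FSID as it appears throughout this log.
    FSID = '33fac0b9-80c7-560f-918a-c92d3021ca1e'

    # Use a short, dash-free fragment of the FSID as the glob so that the
    # slice name (where dashes are escaped as \x2d) is matched as well.
    pattern = f'*{FSID.split("-")[0]}*'

    out = subprocess.run(
        ['systemctl', 'list-units', '--all', '--no-pager', pattern],
        capture_output=True, text=True, check=False)
    print(out.stdout)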
Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: INFO:ceph-crash:pinging cluster to exercise our key Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.212+0000 7f67d327d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.212+0000 7f67d327d640 -1 AuthRegistry(0x7f67cc0680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.213+0000 7f67d327d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.213+0000 7f67d327d640 -1 AuthRegistry(0x7f67d327c000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.219+0000 7f67d17f3640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.222+0000 7f67cbfff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.222+0000 7f67d0ff2640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: 2026-02-01T07:40:26.222+0000 7f67d327d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: [errno 13] RADOS permission denied (error connecting to the cluster) Feb 1 02:40:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215[28665]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Feb 1 02:40:29 localhost podman[28751]: Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.612532199 +0000 UTC m=+0.069893679 container create 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git, release=1764794109, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:29 localhost systemd[1]: Started libpod-conmon-51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082.scope. Feb 1 02:40:29 localhost systemd[1]: Started libcrun container. Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.582780845 +0000 UTC m=+0.040142315 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.693266647 +0000 UTC m=+0.150628097 container init 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, architecture=x86_64, io.buildah.version=1.41.4, name=rhceph, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, version=7, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.701595735 +0000 UTC m=+0.158957205 container start 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.701978673 +0000 UTC m=+0.159340123 container attach 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, io.buildah.version=1.41.4, name=rhceph, version=7, architecture=x86_64) Feb 1 02:40:29 localhost systemd[1]: libpod-51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082.scope: Deactivated successfully. Feb 1 02:40:29 localhost trusting_robinson[28766]: 167 167 Feb 1 02:40:29 localhost podman[28751]: 2026-02-01 07:40:29.707154403 +0000 UTC m=+0.164515883 container died 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Feb 1 02:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-bf96dff0a303ec2bd9e9f1029a08b37c06f63982b37c93871fb58c48e1052245-merged.mount: Deactivated successfully. 
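The ceph-crash messages above are the daemon's start-up self-test: it pings the cluster "to exercise our key", finds no keyring at any of the default admin paths (/etc/ceph/ceph.client.admin.keyring and friends), therefore disables cephx, and the monitors reject it ("server allowed_methods [2] but i only support [1]", where method 2 is cephx and method 1 is none), so the ping ends in "RADOS permission denied". The daemon still settles into watching /var/lib/ceph/crash with a 600 s delay, and the crash client's own keyring is present in the container as the bind-mounted /etc/ceph/ceph.client.crash.np0005604215.keyring shown in the xfs remount line further up. A small, illustrative sketch for pulling these auth complaints out of a saved copy of this log:

    import re
    import sys

    # Patterns taken verbatim from the ceph-crash start-up messages in this log.
    PATTERNS = [
        r'auth: unable to find a keyring',
        r'no keyring found .* disabling cephx',
        r'handle_auth_bad_method server allowed_methods \[2\] but i only support \[1\]',
        r'RADOS permission denied',
    ]

    def scan(path: str) -> None:
        """Print every line of a saved log that matches one of the auth complaints."""
        with open(path, encoding='utf-8', errors='replace') as fh:
            for lineno, line in enumerate(fh, 1):
                if any(re.search(p, line) for p in PATTERNS):
                    print(f'{lineno}: {line.rstrip()}')

    if __name__ == '__main__':
        # Default path is an assumption (standard RHEL syslog target).
        scan(sys.argv[1] if len(sys.argv) > 1 else '/var/log/messages')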
Feb 1 02:40:29 localhost podman[28771]: 2026-02-01 07:40:29.795918964 +0000 UTC m=+0.075747904 container remove 51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_robinson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:29 localhost systemd[1]: libpod-conmon-51b8b6028dfc37f3aae75102ad31d8ce8ca85383535d7e588666e17198122082.scope: Deactivated successfully. Feb 1 02:40:30 localhost podman[28790]: Feb 1 02:40:30 localhost podman[28790]: 2026-02-01 07:40:30.020898804 +0000 UTC m=+0.079912662 container create 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True) Feb 1 02:40:30 localhost systemd[1]: Started libpod-conmon-25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5.scope. Feb 1 02:40:30 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:30 localhost podman[28790]: 2026-02-01 07:40:29.988940104 +0000 UTC m=+0.047953982 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:30 localhost podman[28790]: 2026-02-01 07:40:30.148310517 +0000 UTC m=+0.207324365 container init 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public) Feb 1 02:40:30 localhost podman[28790]: 2026-02-01 07:40:30.1606584 +0000 UTC m=+0.219672248 container start 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 02:40:30 localhost podman[28790]: 2026-02-01 07:40:30.161053898 +0000 UTC m=+0.220067796 container attach 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-12-08T17:28:53Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, name=rhceph, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_CLEAN=True) Feb 1 02:40:30 localhost infallible_hawking[28806]: --> passed data devices: 0 physical, 2 LVM Feb 1 02:40:30 localhost infallible_hawking[28806]: --> relative data size: 1.0 Feb 1 02:40:30 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 1 02:40:30 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 91738c8a-fd02-4668-b2ac-8ebbd36126da Feb 1 02:40:31 localhost lvm[28860]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
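The recurring kernel lines above, "xfs filesystem being remounted at /var/lib/containers/storage/overlay/... supports timestamps until 2038 (0x7fffffff)", are informational: they are printed as the container runtime remounts bind-mounted paths and simply report that the backing XFS filesystem was formatted without the bigtime feature, so its inode timestamps stop at 2038. A quick, illustrative way to confirm this on the host, assuming xfsprogs is installed and the container storage lives on the root filesystem:

    import subprocess

    # Ask xfs_info about the filesystem backing container storage; "bigtime=1"
    # in its output means timestamps beyond 2038 are supported. Using "/" is an
    # assumption that /var/lib/containers is not a separate mount.
    MOUNTPOINT = '/'

    info = subprocess.run(['xfs_info', MOUNTPOINT],
                          capture_output=True, text=True, check=False)
    print(info.stdout)
    if 'bigtime=1' in info.stdout:
        print('bigtime enabled: timestamps are good past 2038')
    else:
        print('bigtime not enabled (or not an XFS mount): timestamps cap at 2038')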
Feb 1 02:40:31 localhost lvm[28860]: VG ceph_vg0 finished Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap Feb 1 02:40:31 localhost infallible_hawking[28806]: stderr: got monmap epoch 3 Feb 1 02:40:31 localhost infallible_hawking[28806]: --> Creating keyring file for osd.2 Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/ Feb 1 02:40:31 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 91738c8a-fd02-4668-b2ac-8ebbd36126da --setuser ceph --setgroup ceph Feb 1 02:40:34 localhost infallible_hawking[28806]: stderr: 2026-02-01T07:40:31.855+0000 7f7654b99a80 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Feb 1 02:40:34 localhost infallible_hawking[28806]: stderr: 2026-02-01T07:40:31.855+0000 7f7654b99a80 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid Feb 1 02:40:34 localhost infallible_hawking[28806]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Feb 1 02:40:34 localhost infallible_hawking[28806]: --> ceph-volume lvm activate successful for osd ID: 2 Feb 1 02:40:34 localhost infallible_hawking[28806]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 1 02:40:34 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph --cluster ceph --name 
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dc0298a4-c2cb-4512-baf8-45dcc8aa1439 Feb 1 02:40:35 localhost lvm[29789]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 1 02:40:35 localhost lvm[29789]: VG ceph_vg1 finished Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5 Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1 Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap Feb 1 02:40:35 localhost infallible_hawking[28806]: stderr: got monmap epoch 3 Feb 1 02:40:35 localhost infallible_hawking[28806]: --> Creating keyring file for osd.5 Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/ Feb 1 02:40:35 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid dc0298a4-c2cb-4512-baf8-45dcc8aa1439 --setuser ceph --setgroup ceph Feb 1 02:40:38 localhost infallible_hawking[28806]: stderr: 2026-02-01T07:40:35.585+0000 7f6719e42a80 -1 bluestore(/var/lib/ceph/osd/ceph-5//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Feb 1 02:40:38 localhost infallible_hawking[28806]: stderr: 2026-02-01T07:40:35.585+0000 7f6719e42a80 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _read_fsid unparsable uuid Feb 1 02:40:38 localhost infallible_hawking[28806]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1 Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-5 --no-mon-config Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 1 02:40:38 localhost infallible_hawking[28806]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:38 localhost infallible_hawking[28806]: --> ceph-volume lvm activate successful for osd ID: 5 Feb 1 02:40:38 localhost infallible_hawking[28806]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1 Feb 1 
02:40:38 localhost systemd[1]: libpod-25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5.scope: Deactivated successfully. Feb 1 02:40:38 localhost systemd[1]: libpod-25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5.scope: Consumed 3.593s CPU time. Feb 1 02:40:38 localhost podman[30688]: 2026-02-01 07:40:38.273698581 +0000 UTC m=+0.036971098 container died 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, release=1764794109, description=Red Hat Ceph Storage 7) Feb 1 02:40:38 localhost systemd[1]: tmp-crun.9mlC9o.mount: Deactivated successfully. Feb 1 02:40:38 localhost systemd[1]: var-lib-containers-storage-overlay-b4e0fc0a50119b5077adde57cc05c20d078f6b94a4ff9d6045f6165f5cc91c2c-merged.mount: Deactivated successfully. Feb 1 02:40:38 localhost podman[30688]: 2026-02-01 07:40:38.307953171 +0000 UTC m=+0.071225658 container remove 25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_hawking, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:38 localhost systemd[1]: libpod-conmon-25e11df276a7e1b9b21a679a6d97c52bf7e99972a6b2ce88a484818c6ee875b5.scope: Deactivated successfully. 
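The infallible_hawking container above is the OSD-creation pass: ceph-volume reports "passed data devices: 0 physical, 2 LVM", then for ceph_vg0/ceph_lv0 (osd.2, backed by /dev/loop3) and ceph_vg1/ceph_lv1 (osd.5, backed by /dev/loop4) it runs the prepare/activate chain recorded in the "Running command:" lines (ceph-authtool, osd new, tmpfs mount of /var/lib/ceph/osd/ceph-N, chown and block symlink, mon getmap, ceph-osd --mkfs, ceph-bluestore-tool prime-osd-dir) and finishes with "ceph-volume lvm create successful". The stderr lines about "_read_bdev_label unable to decode label" and "_read_fsid unparsable uuid" show up while --mkfs probes a brand-new LV that has no BlueStore label yet, and both runs continue straight to "prepare successful", so here they do not indicate a failure. A short, illustrative sketch that reconstructs the per-OSD command sequence from a saved copy of this log:

    import re
    import sys

    # "Running command: ..." lines as printed by ceph-volume inside the
    # infallible_hawking container; the container name is specific to this log,
    # and the sketch assumes the usual one-entry-per-line syslog layout.
    CMD = re.compile(r'infallible_hawking\[\d+\]: Running command: (.*)$')
    OSD_DIR = re.compile(r'/var/lib/ceph/osd/ceph-(\d+)')

    def command_sequence(path: str) -> None:
        """Group the ceph-volume 'Running command' lines by the OSD id they touch."""
        with open(path, encoding='utf-8', errors='replace') as fh:
            for line in fh:
                match = CMD.search(line)
                if not match:
                    continue
                cmd = match.group(1).strip()
                osd = OSD_DIR.search(cmd)
                tag = f'osd.{osd.group(1)}' if osd else 'cluster-wide'
                print(f'{tag:12} {cmd}')

    if __name__ == '__main__':
        command_sequence(sys.argv[1] if len(sys.argv) > 1 else '/var/log/messages')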
Feb 1 02:40:39 localhost podman[30771]: Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.027922512 +0000 UTC m=+0.065601599 container create 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:39 localhost systemd[1]: Started libpod-conmon-4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664.scope. Feb 1 02:40:39 localhost systemd[1]: Started libcrun container. Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.097544763 +0000 UTC m=+0.135223850 container init 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, release=1764794109, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7) Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.000228162 +0000 UTC m=+0.037907229 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.106419073 +0000 UTC m=+0.144098150 container start 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, maintainer=Guillaume 
Abrioux , architecture=x86_64, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public) Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.106667128 +0000 UTC m=+0.144346205 container attach 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, name=rhceph, release=1764794109, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Feb 1 02:40:39 localhost charming_moore[30787]: 167 167 Feb 1 02:40:39 localhost systemd[1]: libpod-4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664.scope: Deactivated successfully. 
Feb 1 02:40:39 localhost podman[30771]: 2026-02-01 07:40:39.109462558 +0000 UTC m=+0.147141665 container died 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, version=7, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:39 localhost podman[30792]: 2026-02-01 07:40:39.193221001 +0000 UTC m=+0.075138331 container remove 4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_moore, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, release=1764794109, ceph=True, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:39 localhost systemd[1]: libpod-conmon-4147ab26c27bf4a62f9072d7edda6626c8fb328ea7cb47a3dc70d4c6a114d664.scope: Deactivated successfully. Feb 1 02:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-e00815c0359ba2efc37bc5e28b0a90648ba1a9b71983ed0b85748cf45629ce98-merged.mount: Deactivated successfully. 
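awesome_taussig, trusting_robinson and charming_moore all follow the same short-lived pattern (create, init, start, attach, "167 167" on stdout, died, remove within a fraction of a second), so the repeated "container died" events are not crashes: these are one-shot helper containers run against the rhceph image, and 167:167 is the uid:gid of the ceph user in Red Hat's Ceph packaging. An illustrative sketch that tallies podman lifecycle events per container name so the one-shot helpers stand out from long-running daemons like the crash container:

    import re
    import sys
    from collections import defaultdict

    # podman event lines in this log look like:
    #   "... container <event> <64-hex id> (image=..., name=<name>, ...)"
    EVENT = re.compile(r'container (create|init|start|attach|died|remove) '
                       r'[0-9a-f]{64} \(image=[^,]+, name=([^,]+)')

    def tally(path: str) -> None:
        """Count podman lifecycle events per container name in a saved log."""
        counts = defaultdict(lambda: defaultdict(int))
        with open(path, encoding='utf-8', errors='replace') as fh:
            for line in fh:
                for event, name in EVENT.findall(line):
                    counts[name][event] += 1
        for name, events in counts.items():
            summary = ', '.join(f'{e}={n}' for e, n in sorted(events.items()))
            print(f'{name}: {summary}')

    if __name__ == '__main__':
        tally(sys.argv[1] if len(sys.argv) > 1 else '/var/log/messages')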
Feb 1 02:40:39 localhost podman[30813]: Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.384612306 +0000 UTC m=+0.062254366 container create f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:39 localhost systemd[1]: Started libpod-conmon-f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069.scope. Feb 1 02:40:39 localhost systemd[1]: Started libcrun container. Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.363867324 +0000 UTC m=+0.041509374 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a094506de0ce8b7e95177f71b78b01da7419612a29aac5bd02f320116ad65da/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a094506de0ce8b7e95177f71b78b01da7419612a29aac5bd02f320116ad65da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a094506de0ce8b7e95177f71b78b01da7419612a29aac5bd02f320116ad65da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.505282136 +0000 UTC m=+0.182924196 container init f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, distribution-scope=public, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=1764794109, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.517887003 +0000 UTC m=+0.195529063 container start f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, ceph=True, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.518221942 +0000 UTC m=+0.195864002 container attach f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, RELEASE=main, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.) 
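What jolly_banach prints next is a JSON inventory of the logical volumes behind the two new OSDs; it has the shape of ceph-volume lvm list --format json output, with OSD ids as top-level keys and each entry carrying the backing device, LV path and the ceph.* LV tags. Once the journal prefixes are stripped, the block parses as ordinary JSON. A minimal sketch, assuming the report has been saved to a file with the prefixes removed:

    import json

    def summarize(report_path: str) -> None:
        """Print osd id -> device / LV path / OSD fsid from a ceph-volume style JSON report."""
        with open(report_path, encoding='utf-8') as fh:
            report = json.load(fh)
        for osd_id, volumes in sorted(report.items(), key=lambda kv: int(kv[0])):
            for vol in volumes:
                tags = vol.get('tags', {})
                print(f"osd.{osd_id}: devices={','.join(vol.get('devices', []))} "
                      f"lv_path={vol.get('lv_path')} "
                      f"osd_fsid={tags.get('ceph.osd_fsid')}")

    if __name__ == '__main__':
        summarize('osd_report.json')  # hypothetical filename for the saved report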
Feb 1 02:40:39 localhost jolly_banach[30828]: {
Feb 1 02:40:39 localhost jolly_banach[30828]: "2": [
Feb 1 02:40:39 localhost jolly_banach[30828]: {
Feb 1 02:40:39 localhost jolly_banach[30828]: "devices": [
Feb 1 02:40:39 localhost jolly_banach[30828]: "/dev/loop3"
Feb 1 02:40:39 localhost jolly_banach[30828]: ],
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_name": "ceph_lv0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_size": "7511998464",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=hPFg2o-8f7Z-SgAH-W30q-gPcL-92rt-G22z3d,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33fac0b9-80c7-560f-918a-c92d3021ca1e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91738c8a-fd02-4668-b2ac-8ebbd36126da,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_uuid": "hPFg2o-8f7Z-SgAH-W30q-gPcL-92rt-G22z3d",
Feb 1 02:40:39 localhost jolly_banach[30828]: "name": "ceph_lv0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "path": "/dev/ceph_vg0/ceph_lv0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "tags": {
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.block_uuid": "hPFg2o-8f7Z-SgAH-W30q-gPcL-92rt-G22z3d",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cephx_lockbox_secret": "",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cluster_fsid": "33fac0b9-80c7-560f-918a-c92d3021ca1e",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cluster_name": "ceph",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.crush_device_class": "",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.encrypted": "0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osd_fsid": "91738c8a-fd02-4668-b2ac-8ebbd36126da",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osd_id": "2",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osdspec_affinity": "default_drive_group",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.type": "block",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.vdo": "0"
Feb 1 02:40:39 localhost jolly_banach[30828]: },
Feb 1 02:40:39 localhost jolly_banach[30828]: "type": "block",
Feb 1 02:40:39 localhost jolly_banach[30828]: "vg_name": "ceph_vg0"
Feb 1 02:40:39 localhost jolly_banach[30828]: }
Feb 1 02:40:39 localhost jolly_banach[30828]: ],
Feb 1 02:40:39 localhost jolly_banach[30828]: "5": [
Feb 1 02:40:39 localhost jolly_banach[30828]: {
Feb 1 02:40:39 localhost jolly_banach[30828]: "devices": [
Feb 1 02:40:39 localhost jolly_banach[30828]: "/dev/loop4"
Feb 1 02:40:39 localhost jolly_banach[30828]: ],
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_name": "ceph_lv1",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_size": "7511998464",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dW1DpF-krs4-WN69-efR8-xEtz-9YoL-U2ay9d,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33fac0b9-80c7-560f-918a-c92d3021ca1e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dc0298a4-c2cb-4512-baf8-45dcc8aa1439,ceph.osd_id=5,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "lv_uuid": "dW1DpF-krs4-WN69-efR8-xEtz-9YoL-U2ay9d",
Feb 1 02:40:39 localhost jolly_banach[30828]: "name": "ceph_lv1",
Feb 1 02:40:39 localhost jolly_banach[30828]: "path": "/dev/ceph_vg1/ceph_lv1",
Feb 1 02:40:39 localhost jolly_banach[30828]: "tags": {
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.block_uuid": "dW1DpF-krs4-WN69-efR8-xEtz-9YoL-U2ay9d",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cephx_lockbox_secret": "",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cluster_fsid": "33fac0b9-80c7-560f-918a-c92d3021ca1e",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.cluster_name": "ceph",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.crush_device_class": "",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.encrypted": "0",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osd_fsid": "dc0298a4-c2cb-4512-baf8-45dcc8aa1439",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osd_id": "5",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.osdspec_affinity": "default_drive_group",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.type": "block",
Feb 1 02:40:39 localhost jolly_banach[30828]: "ceph.vdo": "0"
Feb 1 02:40:39 localhost jolly_banach[30828]: },
Feb 1 02:40:39 localhost jolly_banach[30828]: "type": "block",
Feb 1 02:40:39 localhost jolly_banach[30828]: "vg_name": "ceph_vg1"
Feb 1 02:40:39 localhost jolly_banach[30828]: }
Feb 1 02:40:39 localhost jolly_banach[30828]: ]
Feb 1 02:40:39 localhost jolly_banach[30828]: }
Feb 1 02:40:39 localhost systemd[1]: libpod-f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069.scope: Deactivated successfully.
Feb 1 02:40:39 localhost podman[30813]: 2026-02-01 07:40:39.874528768 +0000 UTC m=+0.552170858 container died f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 1 02:40:39 localhost podman[30837]: 2026-02-01 07:40:39.957964445 +0000 UTC m=+0.071494294 container remove f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_banach, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI,
architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 02:40:39 localhost systemd[1]: libpod-conmon-f99117263fe13839efa3ef74150794d4d78abae92cf8bc9d820d384b46e7d069.scope: Deactivated successfully. Feb 1 02:40:40 localhost systemd[1]: var-lib-containers-storage-overlay-6a094506de0ce8b7e95177f71b78b01da7419612a29aac5bd02f320116ad65da-merged.mount: Deactivated successfully. Feb 1 02:40:40 localhost podman[30921]: Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.751282897 +0000 UTC m=+0.068571482 container create bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., ceph=True, version=7, maintainer=Guillaume Abrioux ) Feb 1 02:40:40 localhost systemd[1]: Started libpod-conmon-bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8.scope. Feb 1 02:40:40 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.820839348 +0000 UTC m=+0.138127943 container init bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, vcs-type=git, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7) Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.721982843 +0000 UTC m=+0.039271418 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.831694418 +0000 UTC m=+0.148982993 container start bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, release=1764794109, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=) Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.831988386 +0000 UTC m=+0.149276971 container attach bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, release=1764794109, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:40 localhost eloquent_solomon[30936]: 167 167 Feb 1 02:40:40 localhost systemd[1]: libpod-bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8.scope: Deactivated successfully. Feb 1 02:40:40 localhost podman[30921]: 2026-02-01 07:40:40.836620674 +0000 UTC m=+0.153909299 container died bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., ceph=True, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.tags=rhceph ceph) Feb 1 02:40:40 localhost podman[30941]: 2026-02-01 07:40:40.91626766 +0000 UTC m=+0.064548145 container remove bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_solomon, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1764794109, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:40:40 localhost systemd[1]: libpod-conmon-bd14782795be0cea33ff7054ae78181cb5c0915fbd4277e095800df3588f6ca8.scope: Deactivated successfully. Feb 1 02:40:41 localhost podman[30968]: Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.240567675 +0000 UTC m=+0.068945519 container create 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.buildah.version=1.41.4, version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1764794109, RELEASE=main, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:40:41 localhost systemd[1]: Started libpod-conmon-6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773.scope. Feb 1 02:40:41 localhost systemd[1]: var-lib-containers-storage-overlay-adf03fbc7013ca98f20cde79d1ff9535d9ecfd43d1c920cebf522278aa277f2b-merged.mount: Deactivated successfully. Feb 1 02:40:41 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.212870966 +0000 UTC m=+0.041248800 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.359656421 +0000 UTC m=+0.188034265 container init 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True) Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.372048184 +0000 UTC m=+0.200426028 container start 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.372872703 +0000 UTC m=+0.201250537 container attach 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:41 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test[30983]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Feb 1 02:40:41 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test[30983]: [--no-systemd] [--no-tmpfs] Feb 1 02:40:41 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test[30983]: ceph-volume activate: error: unrecognized arguments: --bad-option Feb 1 02:40:41 localhost systemd[1]: libpod-6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773.scope: Deactivated successfully. 
Feb 1 02:40:41 localhost podman[30968]: 2026-02-01 07:40:41.593561311 +0000 UTC m=+0.421939165 container died 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 02:40:41 localhost systemd[1]: tmp-crun.JkXOtz.mount: Deactivated successfully. Feb 1 02:40:41 localhost systemd-journald[619]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Feb 1 02:40:41 localhost systemd-journald[619]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 02:40:41 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:40:41 localhost podman[30988]: 2026-02-01 07:40:41.699741372 +0000 UTC m=+0.094508753 container remove 6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, version=7, distribution-scope=public, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Feb 1 02:40:41 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:40:41 localhost systemd[1]: libpod-conmon-6695d7a303520835fbab217669e6ab54d607e306a53b9afb32fcb39c7d0c6773.scope: Deactivated successfully. Feb 1 02:40:41 localhost systemd[1]: Reloading. Feb 1 02:40:42 localhost systemd-rc-local-generator[31043]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:42 localhost systemd-sysv-generator[31049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-559cd5098828c2299c8f2f5f52b7f8312caf53b413ee9deea39b91f1754cab98-merged.mount: Deactivated successfully. Feb 1 02:40:42 localhost systemd[1]: Reloading. Feb 1 02:40:42 localhost systemd-rc-local-generator[31086]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:42 localhost systemd-sysv-generator[31092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:42 localhost systemd[1]: Starting Ceph osd.2 for 33fac0b9-80c7-560f-918a-c92d3021ca1e... 
Feb 1 02:40:42 localhost podman[31150]: Feb 1 02:40:42 localhost podman[31150]: 2026-02-01 07:40:42.872360191 +0000 UTC m=+0.072995225 container create 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, RELEASE=main, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1764794109, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 02:40:42 localhost systemd[1]: Started libcrun container. Feb 1 02:40:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:42 localhost podman[31150]: 2026-02-01 07:40:42.842110907 +0000 UTC m=+0.042745941 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:42 localhost podman[31150]: 2026-02-01 07:40:42.99020101 +0000 UTC m=+0.190836034 container init 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, ceph=True, io.openshift.expose-services=, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4) Feb 1 02:40:42 localhost podman[31150]: 2026-02-01 07:40:42.998945046 +0000 UTC m=+0.199580110 container start 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, release=1764794109, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Feb 1 02:40:42 localhost podman[31150]: 2026-02-01 07:40:42.999197772 +0000 UTC m=+0.199832856 container attach 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., 
com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, release=1764794109)
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 1 02:40:43 localhost bash[31150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 1 02:40:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate[31164]: --> ceph-volume raw activate successful for osd ID: 2
Feb 1 02:40:43 localhost bash[31150]: --> ceph-volume raw activate successful for osd ID: 2
Feb 1 02:40:43 localhost systemd[1]: libpod-0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce.scope: Deactivated successfully.
Feb 1 02:40:43 localhost podman[31150]: 2026-02-01 07:40:43.699629756 +0000 UTC m=+0.900264810 container died 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, architecture=x86_64, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7) Feb 1 02:40:43 localhost systemd[1]: tmp-crun.smkzwY.mount: Deactivated successfully. Feb 1 02:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-a694a21110d41d62fd7c847fe6016f6c83340ffb7b2c11388ebbed59edacbc3b-merged.mount: Deactivated successfully. Feb 1 02:40:43 localhost podman[31278]: 2026-02-01 07:40:43.773779355 +0000 UTC m=+0.067009168 container remove 0c42d580c452e4c25df87c3809ff3eed63e65bc71359a31d04a1e996cd523bce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2-activate, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, release=1764794109, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main) Feb 1 02:40:44 localhost podman[31339]: Feb 1 02:40:44 localhost podman[31339]: 2026-02-01 07:40:44.088843863 +0000 UTC m=+0.074962577 container create 37ee57bb9daae11ee2cb1908eabf1acb4c153c7a836b58f64ca530a5cf6b5994 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, 
ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4) Feb 1 02:40:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc5983fa8a46038d31ddaeafcd1fd31857f0ea27c6925ecd079eb9ed30b3af/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:44 localhost podman[31339]: 2026-02-01 07:40:44.059859247 +0000 UTC m=+0.045977971 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc5983fa8a46038d31ddaeafcd1fd31857f0ea27c6925ecd079eb9ed30b3af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc5983fa8a46038d31ddaeafcd1fd31857f0ea27c6925ecd079eb9ed30b3af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc5983fa8a46038d31ddaeafcd1fd31857f0ea27c6925ecd079eb9ed30b3af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cc5983fa8a46038d31ddaeafcd1fd31857f0ea27c6925ecd079eb9ed30b3af/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:44 localhost podman[31339]: 2026-02-01 07:40:44.202406072 +0000 UTC m=+0.188524746 container init 37ee57bb9daae11ee2cb1908eabf1acb4c153c7a836b58f64ca530a5cf6b5994 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2, com.redhat.component=rhceph-container, version=7, ceph=True, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=) Feb 1 02:40:44 localhost podman[31339]: 2026-02-01 07:40:44.209109864 +0000 UTC m=+0.195228548 container start 37ee57bb9daae11ee2cb1908eabf1acb4c153c7a836b58f64ca530a5cf6b5994 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2, build-date=2025-12-08T17:28:53Z, name=rhceph, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 02:40:44 localhost bash[31339]: 37ee57bb9daae11ee2cb1908eabf1acb4c153c7a836b58f64ca530a5cf6b5994 Feb 1 02:40:44 localhost systemd[1]: Started Ceph osd.2 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. 
Feb 1 02:40:44 localhost ceph-osd[31357]: set uid:gid to 167:167 (ceph:ceph) Feb 1 02:40:44 localhost ceph-osd[31357]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Feb 1 02:40:44 localhost ceph-osd[31357]: pidfile_write: ignore empty --pid-file Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:44 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:44 localhost ceph-osd[31357]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) close Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) close Feb 1 02:40:44 localhost ceph-osd[31357]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal Feb 1 02:40:44 localhost ceph-osd[31357]: load: jerasure load: lrc Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:44 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:44 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) close Feb 1 02:40:44 localhost podman[31451]: Feb 1 02:40:44 localhost podman[31451]: 2026-02-01 07:40:44.984892133 +0000 UTC m=+0.074278123 container create 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, architecture=x86_64, build-date=2025-12-08T17:28:53Z, release=1764794109, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, 
vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 02:40:45 localhost systemd[1]: Started libpod-conmon-508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c.scope. Feb 1 02:40:45 localhost systemd[1]: Started libcrun container. Feb 1 02:40:45 localhost podman[31451]: 2026-02-01 07:40:45.050346246 +0000 UTC m=+0.139732246 container init 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Feb 1 02:40:45 localhost podman[31451]: 2026-02-01 07:40:44.95141417 +0000 UTC m=+0.040800190 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:45 localhost podman[31451]: 2026-02-01 07:40:45.059242197 +0000 UTC m=+0.148628187 container start 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-12-08T17:28:53Z, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.openshift.expose-services=, 
version=7, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:45 localhost podman[31451]: 2026-02-01 07:40:45.059573844 +0000 UTC m=+0.148959874 container attach 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, version=7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, name=rhceph) Feb 1 02:40:45 localhost systemd[1]: libpod-508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c.scope: Deactivated successfully. Feb 1 02:40:45 localhost upbeat_wright[31467]: 167 167 Feb 1 02:40:45 localhost podman[31451]: 2026-02-01 07:40:45.06364428 +0000 UTC m=+0.153030270 container died 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, version=7, io.openshift.expose-services=, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main) Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:45 
localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) close Feb 1 02:40:45 localhost podman[31472]: 2026-02-01 07:40:45.153878711 +0000 UTC m=+0.076175973 container remove 508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_wright, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, ceph=True, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) Feb 1 02:40:45 localhost systemd[1]: libpod-conmon-508d3ad3699183007ffd3c9f0db14b70315b73fc4eaeda0feb1b143690c2da6c.scope: Deactivated successfully. 
Feb 1 02:40:45 localhost ceph-osd[31357]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d2e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs mount Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs mount shared_bdev_used = 0 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: RocksDB version: 7.9.2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Git sha 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DB SUMMARY Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DB Session ID: RCWH0G1AUIL75TA504WC Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: CURRENT file: CURRENT Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: IDENTITY file: IDENTITY Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.error_if_exists: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.create_if_missing: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.env: 0x55aabc466bd0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.fs: LegacyFileSystem Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.info_log: 0x55aabd16c780 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.statistics: (nil) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_fsync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_log_file_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_fallocate: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_direct_reads: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.create_missing_column_families: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_log_dir: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_dir: db.wal Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.advise_random_on_open: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_manager: 0x55aabc1bc140 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.rate_limiter: (nil) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.unordered_write: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.row_cache: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_filter: None Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.two_write_queues: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.manual_wal_flush: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_compression: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.atomic_flush: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.log_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_data_in_errors: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_host_id: __hostname__ Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_jobs: 4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_compactions: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_subcompactions: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_open_files: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_flushes: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Compression algorithms supported: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZSTD supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kXpressCompression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZlibCompression supported: 1 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 
32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 
274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, 
name: O-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd16cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 84d3485b-52e5-4ebd-8f93-e8edd0678d8b Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645406026, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645406363, "job": 1, "event": "recovery_finished"} Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) 
_open_super_meta min_alloc_size 0x1000 Feb 1 02:40:45 localhost ceph-osd[31357]: freelist init Feb 1 02:40:45 localhost ceph-osd[31357]: freelist _read_cfg Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs umount Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) close Feb 1 02:40:45 localhost podman[31565]: Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.479911364 +0000 UTC m=+0.071748819 container create d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, release=1764794109, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) Feb 1 02:40:45 localhost systemd[1]: Started libpod-conmon-d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde.scope. Feb 1 02:40:45 localhost systemd[1]: Started libcrun container. 
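The bluestore _init_alloc record above reports the device geometry in hex (capacity 0x1bfc00000, free 0x1bfbfd000, block size 0x1000) while the surrounding records quote the same device as "7511998464 (0x1bfc00000, 7.0 GiB)". A minimal Python sketch, using only the values printed in those records (no other assumptions), shows how the hex figures map to the human-readable sizes:

    # Sketch: convert the hex sizes from the bluestore _init_alloc / bdev open
    # records above into bytes and GiB. Values are copied from the log.
    capacity = 0x1bfc00000   # "capacity 0x1bfc00000"
    free     = 0x1bfbfd000   # "free 0x1bfbfd000"
    block    = 0x1000        # "block size 0x1000" (4 KiB)

    GiB = 1 << 30
    print(f"capacity: {capacity} bytes = {capacity / GiB:.1f} GiB")   # 7511998464 bytes = 7.0 GiB
    print(f"free:     {free} bytes = {free / GiB:.1f} GiB")           # 7511986176 bytes = 7.0 GiB
    print(f"used:     {capacity - free} bytes "
          f"({(capacity - free) // block} blocks of 4 KiB)")          # 12288 bytes (3 blocks)

Running it reproduces the 7.0 GiB figure the OSD itself logs and shows that, at this point, only a few 4 KiB blocks of the 7 GiB device are allocated, consistent with the reported fragmentation of 5.5e-07.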
Feb 1 02:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.45153377 +0000 UTC m=+0.043371235 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.639307408 +0000 UTC m=+0.231144833 container init d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, version=7, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container) Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.650725011 +0000 UTC m=+0.242562466 container start d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1764794109, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, RELEASE=main, version=7, vcs-type=git, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.650992447 +0000 UTC m=+0.242829872 container attach d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, release=1764794109, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, io.buildah.version=1.41.4, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=) Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Feb 1 02:40:45 localhost ceph-osd[31357]: bdev(0x55aabc1d3180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs mount Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 1 02:40:45 localhost ceph-osd[31357]: bluefs mount shared_bdev_used = 4718592 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: RocksDB version: 7.9.2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Git sha 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DB SUMMARY Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DB Session ID: RCWH0G1AUIL75TA504WD Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: CURRENT file: CURRENT Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: IDENTITY file: IDENTITY Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.error_if_exists: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.create_if_missing: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.env: 0x55aabc216460 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.fs: LegacyFileSystem Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.info_log: 0x55aabd1fe380 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.statistics: (nil) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_fsync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_log_file_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_fallocate: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_direct_reads: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.create_missing_column_families: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_log_dir: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_dir: db.wal Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.advise_random_on_open: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_manager: 0x55aabc1bd5e0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.rate_limiter: (nil) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.unordered_write: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.row_cache: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.two_write_queues: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.manual_wal_flush: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_compression: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.atomic_flush: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.log_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.allow_data_in_errors: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.db_host_id: __hostname__ Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_jobs: 4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_compactions: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_subcompactions: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_open_files: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_background_flushes: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Compression algorithms supported: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZSTD supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kXpressCompression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kZlibCompression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 
whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: 
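The table_factory dump that precedes this point is identical for every column family in this log: 4 KiB blocks, index and filter blocks held in the block cache, a pinned top-level index, format_version 5, a bloom filter, and a shared BinnedLRUCache sized at 483183820 bytes with 4 shard bits. For orientation only, roughly the same table settings could be expressed with stock RocksDB APIs as in the hedged sketch below; this is not ceph-osd's actual code, and RocksDB's plain LRU cache stands in for Ceph's internal BinnedLRUCache.

    // Hedged sketch: approximate the block-based table settings shown in the
    // log using public RocksDB APIs. Ceph's BinnedLRUCache is internal to
    // ceph-osd, so a stock LRU cache is used here instead.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeCfOptionsLikeLog() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                        // block_size: 4096
      t.cache_index_and_filter_blocks = true;     // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;    // pin_top_level_index_and_filter: 1
      t.format_version = 5;                       // format_version: 5
      // The log only says "bloomfilter"; 10 bits/key is an assumed value.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // capacity and num_shard_bits copied from the block_cache_options dump.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }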
rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
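The per-column-family tuning printed above also repeats verbatim for each family: 16 MiB memtables (write_buffer_size 16777216) with up to 64 of them and a merge threshold of 6, LZ4 compression, L0 compaction/slowdown/stop triggers of 8/20/36, 64 MiB target SST files, a 1 GiB level-1 budget and an 8x per-level multiplier. Expressed directly as RocksDB ColumnFamilyOptions those values would look roughly like the sketch below; the field names and numbers are taken from the log, but the function itself is illustrative and not part of Ceph.

    // Hedged sketch: the per-column-family knobs the log prints, written out
    // as plain RocksDB ColumnFamilyOptions. Values copied from the log lines.
    #include <rocksdb/options.h>

    void ApplyLoggedCfTuning(rocksdb::ColumnFamilyOptions& cf) {
      cf.write_buffer_size = 16 * 1024 * 1024;         // write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;                 // max_write_buffer_number: 64
      cf.min_write_buffer_number_to_merge = 6;         // min_write_buffer_number_to_merge: 6
      cf.compression = rocksdb::kLZ4Compression;       // compression: LZ4
      cf.level0_file_num_compaction_trigger = 8;       // level0_file_num_compaction_trigger: 8
      cf.level0_slowdown_writes_trigger = 20;          // level0_slowdown_writes_trigger: 20
      cf.level0_stop_writes_trigger = 36;              // level0_stop_writes_trigger: 36
      cf.target_file_size_base = 64ull * 1024 * 1024;  // target_file_size_base: 67108864
      cf.max_bytes_for_level_base = 1ull << 30;        // max_bytes_for_level_base: 1073741824
      cf.max_bytes_for_level_multiplier = 8;           // max_bytes_for_level_multiplier: 8.0
    }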
Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1fe5e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1aa2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 
localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1ff900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1ab610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: 
rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 
0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1ff900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1ab610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost 
ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.merge_operator: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aabd1ff900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aabc1ab610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression: LZ4 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.bottommost_compression: Disabled Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.num_levels: 7 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 84d3485b-52e5-4ebd-8f93-e8edd0678d8b Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645691508, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 1 
02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645697618, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931645, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "84d3485b-52e5-4ebd-8f93-e8edd0678d8b", "db_session_id": "RCWH0G1AUIL75TA504WD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645701791, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931645, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "84d3485b-52e5-4ebd-8f93-e8edd0678d8b", "db_session_id": "RCWH0G1AUIL75TA504WD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645706026, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931645, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "84d3485b-52e5-4ebd-8f93-e8edd0678d8b", "db_session_id": "RCWH0G1AUIL75TA504WD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931645710560, "job": 1, "event": "recovery_finished"} Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55aabd1b0700 Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: DB pointer 0x55aabd0bda00 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4 Feb 1 02:40:45 localhost ceph-osd[31357]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 02:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 
0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012 Feb 1 02:40:45 localhost ceph-osd[31357]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Feb 1 02:40:45 localhost ceph-osd[31357]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Feb 1 02:40:45 localhost ceph-osd[31357]: _get_class not permitted to load lua Feb 1 02:40:45 localhost ceph-osd[31357]: _get_class not permitted to load sdk Feb 1 02:40:45 localhost ceph-osd[31357]: _get_class not permitted to load test_remote_reads Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 crush map has features 
288232575208783872 was 8705, adjusting msgr requires for mons Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 load_pgs Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 load_pgs opened 0 pgs Feb 1 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay-fac9935f95d33bcb68200c6471366f7f43990c0a87521344e46f17901a168b1d-merged.mount: Deactivated successfully. Feb 1 02:40:45 localhost ceph-osd[31357]: osd.2 0 log_to_monitors true Feb 1 02:40:45 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2[31353]: 2026-02-01T07:40:45.752+0000 7fe9a603da80 -1 osd.2 0 log_to_monitors true Feb 1 02:40:45 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test[31716]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Feb 1 02:40:45 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test[31716]: [--no-systemd] [--no-tmpfs] Feb 1 02:40:45 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test[31716]: ceph-volume activate: error: unrecognized arguments: --bad-option Feb 1 02:40:45 localhost systemd[1]: libpod-d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde.scope: Deactivated successfully. Feb 1 02:40:45 localhost podman[31565]: 2026-02-01 07:40:45.88974824 +0000 UTC m=+0.481585705 container died d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, distribution-scope=public, name=rhceph, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux ) Feb 1 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay-d28d353abdd1b6ecd25446a446502c66ed96fbf47eec85309756a0f5b1fd5e2a-merged.mount: Deactivated successfully. 
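The rocksdb lines above tagged EVENT_LOG_v1 each carry one JSON object (recovery_started, table_file_creation, recovery_finished). A minimal sketch, assuming a capture of this journal on stdin, for pulling those payloads out; the helper name rocksdb_events and the printed fields are illustrative, not part of the original output.

import json
import sys

_dec = json.JSONDecoder()

def rocksdb_events(lines):
    """Yield the JSON payload of every rocksdb EVENT_LOG_v1 record found in `lines`."""
    for line in lines:
        pos = 0
        while (tag := line.find("EVENT_LOG_v1", pos)) != -1:
            brace = line.find("{", tag)
            if brace == -1:
                break
            try:
                # Parse exactly one JSON object; trailing log text on the line is ignored.
                payload, pos = _dec.raw_decode(line, brace)
            except json.JSONDecodeError:
                # Record truncated mid-payload (e.g. wrapped across dump lines); skip it.
                break
            yield payload

if __name__ == "__main__":
    for ev in rocksdb_events(sys.stdin):
        print(ev.get("time_micros"), ev.get("event"),
              ev.get("cf_name", ""), ev.get("file_number", ""))

Fed the records above, this would list the recovery of WAL 000031 for osd.2 and the three table files (35, 36, 37) written while replaying it.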
Feb 1 02:40:45 localhost podman[31936]: 2026-02-01 07:40:45.959558657 +0000 UTC m=+0.061010671 container remove d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate-test, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7) Feb 1 02:40:45 localhost systemd[1]: libpod-conmon-d2eea971760230ceda92e13a2782d5c9c2f96aa5d54ff1b485c939be8909cfde.scope: Deactivated successfully. Feb 1 02:40:46 localhost systemd[1]: Reloading. Feb 1 02:40:46 localhost systemd-rc-local-generator[31992]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:46 localhost systemd-sysv-generator[31997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:46 localhost systemd[1]: Reloading. Feb 1 02:40:46 localhost systemd-sysv-generator[32036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:40:46 localhost systemd-rc-local-generator[32033]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:40:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:40:46 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Feb 1 02:40:46 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Feb 1 02:40:46 localhost systemd[1]: Starting Ceph osd.5 for 33fac0b9-80c7-560f-918a-c92d3021ca1e... 
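The systemd reload above warns that insights-client-boot.service still sets MemoryLimit=, and the journal itself says the replacement is MemoryMax=. A rough, read-only sketch that looks for other unit files carrying the deprecated directive; the scanned directory is an assumption and nothing is modified.

import pathlib
import re

# Scan installed service units for the deprecated MemoryLimit= setting flagged above.
for unit in pathlib.Path("/usr/lib/systemd/system").glob("*.service"):
    text = unit.read_text(errors="replace")
    for n, line in enumerate(text.splitlines(), 1):
        if re.match(r"\s*MemoryLimit=", line):
            print(f"{unit}:{n}: {line.strip()}  ->  consider MemoryMax=")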
Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 done with init, starting boot process Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 start_boot Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1 Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Feb 1 02:40:46 localhost ceph-osd[31357]: osd.2 0 bench count 12288000 bsize 4 KiB Feb 1 02:40:47 localhost podman[32098]: Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.133301839 +0000 UTC m=+0.078241817 container create 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public) Feb 1 02:40:47 localhost systemd[1]: Started libcrun container. 
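Each podman record in this stream logs one container lifecycle step (create, init, start, attach, died, remove) with a UTC timestamp, the 64-character container ID and a name= label. A sketch, again reading a journal capture from stdin, that folds them into a per-container timeline; PODMAN_RE, NAME_RE and container_timeline are illustrative names.

import re
import sys
from collections import defaultdict

PODMAN_RE = re.compile(
    r'podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC '
    r'm=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')
NAME_RE = re.compile(r', name=([^,)]+)')   # first name= label after the container ID

def container_timeline(lines):
    """Map short container ID -> list of (UTC time, event, container name)."""
    timeline = defaultdict(list)
    for line in lines:
        for m in PODMAN_RE.finditer(line):
            name = NAME_RE.search(line, m.end())
            timeline[m.group('cid')[:12]].append(
                (m.group('ts'), m.group('event'), name.group(1) if name else '?'))
    return timeline

if __name__ == "__main__":
    for cid, events in container_timeline(sys.stdin).items():
        print(cid, ' -> '.join(e for _, e, _ in events))

Applied to the records above, this reduces the osd-5-activate container (36861a26c3f7...) to create -> init -> start -> attach -> died -> remove.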
Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.098935618 +0000 UTC m=+0.043875616 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.271717256 +0000 UTC m=+0.216657234 container init 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, release=1764794109, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) 
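The kernel's repeated "supports timestamps until 2038 (0x7fffffff)" notices above come from xfs reporting a 32-bit signed seconds-since-epoch limit on inode timestamps; the hex value decodes to the expected cutoff:

from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch is the limit quoted by the kernel above.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())   # 2038-01-19T03:14:07+00:00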
Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.295556544 +0000 UTC m=+0.240496532 container start 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.buildah.version=1.41.4) Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.295937162 +0000 UTC m=+0.240877180 container attach 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph) Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 
1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:47 localhost bash[32098]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 1 02:40:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate[32112]: --> ceph-volume raw activate successful for osd ID: 5 Feb 1 02:40:47 localhost bash[32098]: --> ceph-volume raw activate successful for osd ID: 5 Feb 1 02:40:47 localhost systemd[1]: libpod-36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2.scope: Deactivated successfully. Feb 1 02:40:47 localhost podman[32098]: 2026-02-01 07:40:47.98193296 +0000 UTC m=+0.926872938 container died 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 02:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-47e1801a4fc2bec716e4d074dd0377cf0c241b68e1f1a98e97d826cb022aeee0-merged.mount: Deactivated successfully. 
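The osd-5-activate container above echoes each step it performs as "Running command: ...". Purely as an illustration of that sequence, a dry-run sketch that replays the same commands; the paths and flags are copied verbatim from the records above, and the DRY_RUN guard keeps it from actually re-running them outside ceph-volume.

import subprocess

DRY_RUN = True                        # flip to actually execute the steps
OSD_DIR = "/var/lib/ceph/osd/ceph-5"
DEV = "/dev/mapper/ceph_vg1-ceph_lv1"

# The exact commands logged by "ceph-volume raw activate" for osd.5 above.
STEPS = [
    ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
    ["/usr/bin/ceph-bluestore-tool", "prime-osd-dir",
     "--path", OSD_DIR, "--no-mon-config", "--dev", DEV],
    ["/usr/bin/chown", "-h", "ceph:ceph", DEV],
    ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-1"],
    ["/usr/bin/ln", "-s", DEV, OSD_DIR + "/block"],
    ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
]

for cmd in STEPS:
    print("Running command:", " ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)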
Feb 1 02:40:48 localhost podman[32239]: 2026-02-01 07:40:48.068420031 +0000 UTC m=+0.080179058 container remove 36861a26c3f7b486f33da731f5a5ad899c89c0e4966f5188079bc31335796fe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5-activate, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, name=rhceph, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:48 localhost podman[32300]: Feb 1 02:40:48 localhost podman[32300]: 2026-02-01 07:40:48.413378466 +0000 UTC m=+0.079040363 container create 0892bf7f52004db1c7d54184bf179169b4e0add1d0e942ba56af5041d91a8912 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 1 02:40:48 localhost podman[32300]: 2026-02-01 07:40:48.385595555 +0000 UTC m=+0.051257482 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683526f88687e73b27941c5962cadab4561ff25659e6c5d067f7f9f43e332ea5/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683526f88687e73b27941c5962cadab4561ff25659e6c5d067f7f9f43e332ea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:48 localhost kernel: xfs 
filesystem being remounted at /var/lib/containers/storage/overlay/683526f88687e73b27941c5962cadab4561ff25659e6c5d067f7f9f43e332ea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683526f88687e73b27941c5962cadab4561ff25659e6c5d067f7f9f43e332ea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683526f88687e73b27941c5962cadab4561ff25659e6c5d067f7f9f43e332ea5/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:48 localhost podman[32300]: 2026-02-01 07:40:48.563782599 +0000 UTC m=+0.229444496 container init 0892bf7f52004db1c7d54184bf179169b4e0add1d0e942ba56af5041d91a8912 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:48 localhost systemd[1]: tmp-crun.57R3hN.mount: Deactivated successfully. 
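Note that the syslog prefixes in this file are local time (Feb 1 02:40:48) while the podman records on the same lines carry an explicit UTC timestamp (2026-02-01 07:40:48 +0000 UTC), so the host clock appears to sit five hours behind UTC. A quick check with both values copied from the record above:

from datetime import datetime

# Both timestamps come from the same "container init" record above; the year is
# added to the syslog value by hand because the syslog prefix omits it.
syslog_local = datetime.strptime("2026-02-01 02:40:48", "%Y-%m-%d %H:%M:%S")
podman_utc   = datetime.strptime("2026-02-01 07:40:48", "%Y-%m-%d %H:%M:%S")
print(podman_utc - syslog_local)   # 5:00:00 -> local time is UTC-05:00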
Feb 1 02:40:48 localhost podman[32300]: 2026-02-01 07:40:48.575864156 +0000 UTC m=+0.241526073 container start 0892bf7f52004db1c7d54184bf179169b4e0add1d0e942ba56af5041d91a8912 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1764794109, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 02:40:48 localhost bash[32300]: 0892bf7f52004db1c7d54184bf179169b4e0add1d0e942ba56af5041d91a8912 Feb 1 02:40:48 localhost systemd[1]: Started Ceph osd.5 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 02:40:48 localhost ceph-osd[32318]: set uid:gid to 167:167 (ceph:ceph) Feb 1 02:40:48 localhost ceph-osd[32318]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Feb 1 02:40:48 localhost ceph-osd[32318]: pidfile_write: ignore empty --pid-file Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:48 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:48 localhost ceph-osd[32318]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) close Feb 1 02:40:48 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) close Feb 1 02:40:49 localhost ceph-osd[32318]: starting osd.5 osd_data /var/lib/ceph/osd/ceph-5 
/var/lib/ceph/osd/ceph-5/journal Feb 1 02:40:49 localhost ceph-osd[32318]: load: jerasure load: lrc Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) close Feb 1 02:40:49 localhost podman[32408]: Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.402518298 +0000 UTC m=+0.072171668 container create bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, distribution-scope=public, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) close Feb 1 02:40:49 localhost systemd[1]: Started libpod-conmon-bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62.scope. Feb 1 02:40:49 localhost systemd[1]: Started libcrun container. 
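The bdev open lines for osd.5 report the device size three ways (7511998464 bytes, 0x1bfc00000, 7.0 GiB), and _set_cache_sizes splits the 1073741824-byte cache into meta 0.45 / kv 0.45 / data 0.06. A short check of that arithmetic using only the numbers printed above:

# Device size as reported by the bdev open line, in decimal, hex and GiB.
size = 7511998464
assert size == 0x1BFC00000
print(f"{size / 2**30:.1f} GiB")   # 7.0 GiB

# Cache split reported by _set_cache_sizes for osd.5.
cache = 1073741824
for name, ratio in [("meta", 0.45), ("kv", 0.45), ("data", 0.06)]:
    print(name, int(cache * ratio), "bytes")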
Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.371767393 +0000 UTC m=+0.041420793 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.516427934 +0000 UTC m=+0.186081304 container init bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:40:49 localhost systemd[1]: tmp-crun.uSw8jT.mount: Deactivated successfully. Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.53363934 +0000 UTC m=+0.203292730 container start bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, maintainer=Guillaume Abrioux , ceph=True, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109) Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.534032318 +0000 UTC m=+0.203685688 container attach bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1764794109, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux ) Feb 1 02:40:49 localhost epic_chaplygin[32428]: 167 167 Feb 1 02:40:49 localhost systemd[1]: libpod-bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62.scope: Deactivated successfully. Feb 1 02:40:49 localhost podman[32408]: 2026-02-01 07:40:49.538907962 +0000 UTC m=+0.208561332 container died bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:49 localhost podman[32433]: 2026-02-01 07:40:49.642880176 +0000 UTC m=+0.096048017 container remove bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_chaplygin, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-12-08T17:28:53Z, release=1764794109, GIT_CLEAN=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Feb 1 02:40:49 localhost systemd[1]: libpod-conmon-bc86a720f94d15711eaf46c54d89af48202919088b7b2e02e38bf55deab9fb62.scope: Deactivated successfully. Feb 1 02:40:49 localhost ceph-osd[32318]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Feb 1 02:40:49 localhost ceph-osd[32318]: osd.5:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee0e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:49 localhost ceph-osd[32318]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 1 02:40:49 localhost ceph-osd[32318]: bluefs mount Feb 1 02:40:49 localhost ceph-osd[32318]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 1 02:40:49 localhost ceph-osd[32318]: bluefs mount shared_bdev_used = 0 Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: RocksDB version: 7.9.2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Git sha 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: DB SUMMARY Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: DB Session ID: 6H3HWCHWKTA3X6X83VNN Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: CURRENT file: CURRENT Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: IDENTITY file: IDENTITY Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.error_if_exists: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.create_if_missing: 0 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.env: 0x55797f174c40 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.fs: LegacyFileSystem Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.info_log: 0x55797fe7c900 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.statistics: (nil) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.use_fsync: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_log_file_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.allow_fallocate: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.use_direct_reads: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.create_missing_column_families: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.db_log_dir: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.wal_dir: db.wal Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.advise_random_on_open: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_manager: 0x55797eeca140 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.rate_limiter: (nil) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.unordered_write: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.allow_concurrent_memtable_write: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.row_cache: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.wal_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.two_write_queues: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.manual_wal_flush: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.wal_compression: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.atomic_flush: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.log_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.allow_data_in_errors: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.db_host_id: __hostname__ Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_background_jobs: 4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_background_compactions: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_subcompactions: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_open_files: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bytes_per_sync: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_background_flushes: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Compression algorithms supported: Feb 1 02:40:49 localhost ceph-osd[32318]: 
rocksdb: #011kZSTD supported: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kXpressCompression supported: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kZlibCompression supported: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 
localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 
Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: 
rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cac0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[31357]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.069 iops: 6161.642 elapsed_sec: 0.487 Feb 1 02:40:49 localhost ceph-osd[31357]: log_channel(cluster) log [WRN] : OSD bench result of 6161.642331 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. 
The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Feb 1 02:40:49 localhost ceph-osd[31357]: osd.2 0 waiting for initial osdmap Feb 1 02:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2[31353]: 2026-02-01T07:40:49.742+0000 7fe9a1fbc640 -1 osd.2 0 waiting for initial osdmap Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 
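The mclock warning above recommends measuring the drive's IOPS capacity with an external tool such as fio and then overriding osd_mclock_max_capacity_iops_[hdd|ssd]. The Python sketch below is one illustrative way that could be scripted; it is not part of the captured log. The OSD id, device path, device class, and fio job parameters are assumptions chosen for the example (osd.2 and the hdd class are inferred from the 315 IOPS value in the warning, which appears to be the hdd default), and the random-write job is destructive, so it must only be pointed at a drive that does not hold live OSD data.

#!/usr/bin/env python3
# Illustrative sketch only -- not part of the captured log above.
# Measures 4 KiB random-write IOPS with fio and stores the result as the
# mclock capacity override for one OSD, as the warning suggests.
import json
import subprocess

OSD_ID = 2             # osd.2 from the warning above (assumption)
DEVICE = "/dev/sdX"    # placeholder test device -- replace; the job overwrites it
DEVICE_CLASS = "hdd"   # or "ssd", matching the OSD's device class (assumption)

def measure_iops() -> float:
    # Short 4 KiB random-write probe; fio emits a JSON report on stdout.
    result = subprocess.run(
        [
            "fio",
            "--name=osd-iops-probe",
            f"--filename={DEVICE}",
            "--rw=randwrite",
            "--bs=4k",
            "--direct=1",
            "--ioengine=libaio",
            "--iodepth=16",
            "--runtime=60",
            "--time_based",
            "--output-format=json",
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    data = json.loads(result.stdout)
    return data["jobs"][0]["write"]["iops"]

def override_capacity(iops: float) -> None:
    # Persist the measured capacity so mclock no longer falls back to the default.
    subprocess.run(
        [
            "ceph", "config", "set", f"osd.{OSD_ID}",
            f"osd_mclock_max_capacity_iops_{DEVICE_CLASS}", f"{iops:.2f}",
        ],
        check=True,
    )

if __name__ == "__main__":
    iops = measure_iops()
    print(f"measured {iops:.2f} IOPS for osd.{OSD_ID}")
    override_capacity(iops)

Running "ceph config get osd.2 osd_mclock_max_capacity_iops_hdd" afterwards should confirm the stored override.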
Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 
localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 
Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 
02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 
localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7cce0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.max_write_buffer_number: 64 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:49 
localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a89ed51-aa97-4c8c-8741-d02f18e7055a Feb 1 02:40:49 localhost 
ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931649733052, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931649733299, "job": 1, "event": "recovery_finished"} Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old nid_max 1025 Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old blobid_max 10240 Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta min_alloc_size 0x1000 Feb 1 02:40:49 localhost ceph-osd[32318]: freelist init Feb 1 02:40:49 localhost ceph-osd[32318]: freelist _read_cfg Feb 1 02:40:49 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Feb 1 02:40:49 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Feb 1 02:40:49 localhost ceph-osd[32318]: bluefs umount Feb 1 02:40:49 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) close Feb 1 02:40:49 localhost podman[32647]: Feb 1 02:40:49 localhost podman[32647]: 2026-02-01 07:40:49.819791393 +0000 UTC m=+0.042557287 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 crush map has features 288514050185494528, adjusting msgr requires for clients Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 crush map has features 3314932999778484224, adjusting msgr requires for osds Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 check_osdmap_features require_osd_release unknown -> reef Feb 1 02:40:50 localhost podman[32647]: 2026-02-01 07:40:50.15454899 +0000 UTC m=+0.377314914 container create f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, vcs-type=git, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:40:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-2[31353]: 2026-02-01T07:40:50.160+0000 7fe99d5e6640 -1 osd.2 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 set_numa_affinity not setting numa affinity Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Feb 1 02:40:50 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 1 02:40:50 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 1 02:40:50 localhost ceph-osd[32318]: bdev(0x55797eee1180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 1 02:40:50 localhost ceph-osd[32318]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 1 02:40:50 localhost ceph-osd[32318]: bluefs mount Feb 1 02:40:50 localhost ceph-osd[32318]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 1 02:40:50 localhost ceph-osd[32318]: bluefs mount shared_bdev_used = 4718592 Feb 1 02:40:50 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: RocksDB version: 7.9.2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Git sha 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: DB SUMMARY Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: DB Session ID: 6H3HWCHWKTA3X6X83VNM Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: CURRENT file: CURRENT Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: IDENTITY file: IDENTITY Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.error_if_exists: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.create_if_missing: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: 
Options.flush_verify_memtable_count: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.env: 0x55797f006310 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.fs: LegacyFileSystem Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.info_log: 0x55797ef79320 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.statistics: (nil) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.use_fsync: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_log_file_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_fallocate: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.use_direct_reads: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.create_missing_column_families: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.db_log_dir: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.wal_dir: db.wal Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.advise_random_on_open: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_manager: 0x55797eecb540 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.rate_limiter: (nil) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.unordered_write: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.row_cache: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.wal_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.two_write_queues: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.manual_wal_flush: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.wal_compression: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.atomic_flush: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.log_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.allow_data_in_errors: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.db_host_id: __hostname__ Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_background_jobs: 4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_background_compactions: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_subcompactions: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_open_files: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bytes_per_sync: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_readahead_size: 2097152 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_background_flushes: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Compression algorithms supported: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kZSTD supported: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kXpressCompression supported: 0 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kZlibCompression supported: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 
localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 
localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 
8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 
localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 
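The table_factory dump above reports a BinnedLRUCache block cache with capacity 483183820 bytes and num_shard_bits 4. As a rough reading of just those two numbers (a minimal Python sketch; the even split per shard follows the usual RocksDB/Ceph sharded-LRU behaviour and is assumed here, not stated in the log):

    # Rough arithmetic on the block_cache_options reported in the table_factory dump.
    # capacity and num_shard_bits are copied from the log; the even per-shard split
    # is the usual sharded-LRU behaviour, assumed rather than logged.
    capacity_bytes = 483_183_820
    num_shard_bits = 4
    num_shards = 2 ** num_shard_bits                  # 16 shards
    per_shard_mib = capacity_bytes / num_shards / 2**20
    print(f"{capacity_bytes / 2**20:.1f} MiB total across {num_shards} shards, "
          f"~{per_shard_mib:.1f} MiB each")           # ~460.8 MiB total, ~28.8 MiB/shard
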
Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 
1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: 
Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost systemd[1]: Started libpod-conmon-f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674.scope. Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 
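Every column family repeats an almost identical Options dump, which makes per-family differences (for example the merge_operator and table_properties_collectors on [default]) easy to miss. A minimal sketch, assuming the journal has been saved one entry per line to a plain file (the osd.log name and the options_by_cf helper are illustrative, not from the log), that groups the Options lines by column family so the dumps can be diffed:

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+)\s*:\s*(.*)$")

    def options_by_cf(path):
        """Group 'Options.<key>: <value>' lines under the most recent column family header."""
        cfs = defaultdict(dict)
        current = None
        with open(path) as fh:
            for line in fh:
                header = CF_RE.search(line)
                if header:
                    current = header.group(1)
                    continue
                opt = OPT_RE.search(line)
                if opt and current is not None:
                    cfs[current][opt.group(1)] = opt.group(2).strip()
        return cfs

    # Example: show settings that differ between [default] and [m-0].
    cfs = options_by_cf("osd.log")
    for key in sorted(set(cfs["default"]) & set(cfs["m-0"])):
        if cfs["default"][key] != cfs["m-0"][key]:
            print(key, cfs["default"][key], "->", cfs["m-0"][key])
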
Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 
1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: 
Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: 
rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dd60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dfa0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 
02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
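The dump repeats a nearly identical block of Options.<key>: <value> records for each column family (only the family name and a few block-cache fields differ), which makes manual comparison tedious. One way to compare them is to group the key/value pairs under the most recent "Options for column family [...]" header; the Python sketch below does that with the standard library only. It is an illustration for reading this excerpt, not any Ceph or RocksDB tool: the file name osd.log is made up, the regular expressions are assumptions based on the record format shown above, and the #012-escaped table_factory blob is deliberately ignored.

import re
from collections import OrderedDict

# A syslog record starts with "<Mon> <day> <HH:MM:SS>"; split the collapsed text on that prefix.
RECORD_START = re.compile(r"(?=[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s)")
CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s+(\S+)")

def parse_cf_options(text):
    """Group RocksDB 'Options.key: value' pairs under the most recent column-family header."""
    per_cf, current = OrderedDict(), None
    for record in RECORD_START.split(text):
        header = CF_HEADER.search(record)
        if header:
            # New "--- Options for column family [name]:" header; later pairs belong to it.
            current = header.group(1)
            per_cf.setdefault(current, OrderedDict())
            continue
        if current is None:
            continue
        for key, value in OPTION.findall(record):
            per_cf[current][key] = value
    return per_cf

if __name__ == "__main__":
    with open("osd.log") as fh:          # hypothetical file holding this log excerpt
        cfs = parse_cf_options(fh.read())
    for name, opts in cfs.items():
        print(name, opts.get("write_buffer_size"), opts.get("compression"))

Run against this excerpt, such a grouping would show, for example, that every family dumped so far logs write_buffer_size 16777216 and LZ4 compression, while the O-* families sit on a 536870912-byte BinnedLRUCache versus 483183820 bytes for the p-* families.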
Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dfa0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 
32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: 
rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.merge_operator: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_filter_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.sst_partitioner_factory: None Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55797fe7dfa0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55797eeb9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.write_buffer_size: 16777216 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number: 64 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression: LZ4 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression: Disabled Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.num_levels: 7 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 02:40:50 localhost 
ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.level: 32767 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.enabled: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.arena_block_size: 1048576 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 02:40:50 
localhost ceph-osd[32318]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 02:40:50 localhost systemd[1]: Started libcrun container. Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_support: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.bloom_locality: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.max_successive_merges: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.force_consistency_checks: 1 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.ttl: 2592000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_files: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.min_blob_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_size: 268435456 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: 
Options.blob_compaction_readahead_size: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a89ed51-aa97-4c8c-8741-d02f18e7055a Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931650213332, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 1 02:40:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62613d8e1ec585adf6ae8ec7e3389610f689ed50fffcc6732ab2e4b96caeee8/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931650224819, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931650, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a89ed51-aa97-4c8c-8741-d02f18e7055a", "db_session_id": "6H3HWCHWKTA3X6X83VNM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62613d8e1ec585adf6ae8ec7e3389610f689ed50fffcc6732ab2e4b96caeee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931650232203, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931650, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a89ed51-aa97-4c8c-8741-d02f18e7055a", "db_session_id": "6H3HWCHWKTA3X6X83VNM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931650236656, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, 
"format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769931650, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a89ed51-aa97-4c8c-8741-d02f18e7055a", "db_session_id": "6H3HWCHWKTA3X6X83VNM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769931650239931, "job": 1, "event": "recovery_finished"} Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Feb 1 02:40:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62613d8e1ec585adf6ae8ec7e3389610f689ed50fffcc6732ab2e4b96caeee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:50 localhost podman[32647]: 2026-02-01 07:40:50.24465484 +0000 UTC m=+0.467420764 container init f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, release=1764794109, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 02:40:50 localhost podman[32647]: 2026-02-01 07:40:50.255423338 +0000 UTC m=+0.478189222 container start f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 02:40:50 localhost podman[32647]: 2026-02-01 07:40:50.255635804 +0000 UTC m=+0.478401748 container attach f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, ceph=True, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, build-date=2025-12-08T17:28:53Z, release=1764794109, vendor=Red Hat, Inc., version=7) Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55797ef80380 Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: DB pointer 0x55797fdd9a00 Feb 1 02:40:50 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 1 02:40:50 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super from 4, latest 4 Feb 1 02:40:50 localhost ceph-osd[32318]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super done Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 02:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 
writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 460.80 MB usag Feb 1 02:40:50 localhost ceph-osd[32318]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Feb 1 02:40:50 localhost ceph-osd[32318]: 
/builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Feb 1 02:40:50 localhost ceph-osd[32318]: _get_class not permitted to load lua Feb 1 02:40:50 localhost ceph-osd[32318]: _get_class not permitted to load sdk Feb 1 02:40:50 localhost ceph-osd[32318]: _get_class not permitted to load test_remote_reads Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 crush map has features 288232575208783872, adjusting msgr requires for clients Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 crush map has features 288232575208783872, adjusting msgr requires for osds Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 load_pgs Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 load_pgs opened 0 pgs Feb 1 02:40:50 localhost ceph-osd[32318]: osd.5 0 log_to_monitors true Feb 1 02:40:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5[32314]: 2026-02-01T07:40:50.276+0000 7f0ff47d0a80 -1 osd.5 0 log_to_monitors true Feb 1 02:40:50 localhost systemd[1]: tmp-crun.amLOiP.mount: Deactivated successfully. Feb 1 02:40:50 localhost systemd[1]: var-lib-containers-storage-overlay-66c3020dd7ec370c750e92e3346cf8a35f6796a895fc8a0f83255e23cf8dbfd3-merged.mount: Deactivated successfully. Feb 1 02:40:50 localhost ceph-osd[31357]: osd.2 11 tick checking mon for new map Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: { Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "91738c8a-fd02-4668-b2ac-8ebbd36126da": { Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "ceph_fsid": "33fac0b9-80c7-560f-918a-c92d3021ca1e", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "osd_id": 2, Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "osd_uuid": "91738c8a-fd02-4668-b2ac-8ebbd36126da", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "type": "bluestore" Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: }, Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "dc0298a4-c2cb-4512-baf8-45dcc8aa1439": { Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "ceph_fsid": "33fac0b9-80c7-560f-918a-c92d3021ca1e", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "osd_id": 5, Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "osd_uuid": "dc0298a4-c2cb-4512-baf8-45dcc8aa1439", Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: "type": "bluestore" Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: } Feb 1 02:40:50 localhost relaxed_heyrovsky[32695]: } Feb 1 02:40:50 localhost systemd[1]: libpod-f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674.scope: Deactivated successfully. 
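[editor note] The multi-line RocksDB output above (the per-column-family Options dumps and the "------- DUMPING STATS -------" report from osd.5's BlueStore open) reaches the journal with its embedded newlines and tabs replaced by the octal escapes #012 and #011, the usual syslog control-character escaping. A small sketch for turning such entries back into readable multi-line text; the file name "messages" is only a placeholder for whatever export this log came from.

    import re

    # Syslog-style control-character escapes: '#' followed by three octal digits
    # ('#012' is a newline, '#011' is a tab).
    ESC = re.compile(r"#([0-7]{3})")

    def unescape(entry):
        """Replace '#NNN' octal escapes with the control characters they encode."""
        return ESC.sub(lambda m: chr(int(m.group(1), 8)), entry)

    # Usage: recover the compaction-stats tables from the dump above.
    # with open("messages", errors="replace") as fh:    # placeholder file name
    #     for line in fh:
    #         if "DUMPING STATS" in line:
    #             print(unescape(line))

RocksDB emits the whole stats block as a single log call, so one unescaped journal entry expands into the full per-column-family report.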
Feb 1 02:40:50 localhost podman[32647]: 2026-02-01 07:40:50.910367124 +0000 UTC m=+1.133133088 container died f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 02:40:50 localhost systemd[1]: var-lib-containers-storage-overlay-b62613d8e1ec585adf6ae8ec7e3389610f689ed50fffcc6732ab2e4b96caeee8-merged.mount: Deactivated successfully. Feb 1 02:40:51 localhost podman[32915]: 2026-02-01 07:40:51.013694345 +0000 UTC m=+0.090298854 container remove f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_heyrovsky, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, release=1764794109, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 02:40:51 localhost systemd[1]: libpod-conmon-f59b1a2b2f90ec887f5cf9241f24b3e26570eb6a068387300a8a262f58be7674.scope: Deactivated successfully. 
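[editor note] During the recovery sequence a little earlier, RocksDB also logged machine-readable EVENT_LOG_v1 entries (recovery_started, three table_file_creation events while replaying WAL #31, recovery_finished) whose payload is plain JSON. A rough sketch for pulling those payloads out of a journal export; json.JSONDecoder.raw_decode is used so that anything following the JSON object on the same physical line is ignored, and the file name is again a placeholder.

    import json

    MARKER = "EVENT_LOG_v1 "
    DECODER = json.JSONDecoder()

    def rocksdb_events(path):
        """Yield every JSON payload logged after an 'EVENT_LOG_v1' marker."""
        with open(path, errors="replace") as fh:
            for line in fh:
                pos = line.find(MARKER)
                while pos != -1:
                    try:
                        # raw_decode parses one JSON value and reports where it ended.
                        event, end = DECODER.raw_decode(line, pos + len(MARKER))
                    except ValueError:
                        break
                    yield event
                    pos = line.find(MARKER, end)

    # Usage: list the SST files written while replaying the WAL.
    # for ev in rocksdb_events("messages"):             # placeholder file name
    #     if ev.get("event") == "table_file_creation":
    #         print(ev["cf_name"], ev["file_number"], ev["file_size"])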
Feb 1 02:40:51 localhost ceph-osd[31357]: osd.2 13 state: booting -> active Feb 1 02:40:51 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Feb 1 02:40:51 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 done with init, starting boot process Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 start_boot Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 maybe_override_options_for_qos osd_max_backfills set to 1 Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Feb 1 02:40:52 localhost ceph-osd[32318]: osd.5 0 bench count 12288000 bsize 4 KiB Feb 1 02:40:52 localhost systemd[1]: tmp-crun.CTMiWs.mount: Deactivated successfully. Feb 1 02:40:52 localhost podman[33047]: 2026-02-01 07:40:52.53561403 +0000 UTC m=+0.080743719 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, release=1764794109, name=rhceph, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 02:40:52 localhost podman[33047]: 2026-02-01 07:40:52.676524532 +0000 UTC m=+0.221654151 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, release=1764794109, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Feb 1 02:40:53 localhost ceph-osd[31357]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients Feb 1 02:40:53 localhost ceph-osd[31357]: osd.2 15 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Feb 1 02:40:53 localhost ceph-osd[31357]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds Feb 1 02:40:53 localhost ceph-osd[31357]: osd.2 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [2] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:40:54 localhost ceph-osd[31357]: osd.2 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [2,4,3] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [2] -> [2,4,3], acting [2] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:40:54 localhost ceph-osd[31357]: osd.2 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [2,4,3] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.832 iops: 7636.984 elapsed_sec: 0.393 Feb 1 02:40:54 localhost ceph-osd[32318]: log_channel(cluster) log [WRN] : OSD bench result of 7636.984107 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
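[editor note] The cluster-log warning just above is osd.5's mclock scheduler discarding its own startup bench result (about 7637 IOPS, outside the 50-500 IOPS sanity window) and keeping the configured 315 IOPS. A small sketch that extracts those figures from the warning; the "ceph config set ... osd_mclock_max_capacity_iops_hdd" follow-up in the usage comment is taken from the warning's own recommendation, assumes this OSD sits on an HDD (the _ssd variant applies otherwise), and the value must come from a real benchmark such as fio, not from anything in this sketch.

    import re

    WARNING = re.compile(
        r"OSD bench result of ([0-9.]+) IOPS .* for (osd\.\d+)\. "
        r"IOPS capacity is unchanged at ([0-9.]+) IOPS"
    )

    def mclock_bench_warnings(path):
        """Yield (osd, measured_iops, configured_iops) from mclock bench warnings."""
        with open(path, errors="replace") as fh:
            for line in fh:
                match = WARNING.search(line)
                if match:
                    measured, osd, configured = match.groups()
                    yield osd, float(measured), float(configured)

    # Usage (placeholder file name; the override value must come from fio or similar):
    # for osd, measured, configured in mclock_bench_warnings("messages"):
    #     print(f"{osd}: bench {measured:.0f} IOPS, mclock still assumes {configured:.0f} IOPS")
    #     print(f"  e.g. ceph config set {osd} osd_mclock_max_capacity_iops_hdd <fio-measured value>")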
Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 0 waiting for initial osdmap Feb 1 02:40:54 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5[32314]: 2026-02-01T07:40:54.694+0000 7f0ff0f64640 -1 osd.5 0 waiting for initial osdmap Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 crush map has features 288514051259236352, adjusting msgr requires for clients Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 crush map has features 3314933000852226048, adjusting msgr requires for osds Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 check_osdmap_features require_osd_release unknown -> reef Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 1 02:40:54 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-osd-5[32314]: 2026-02-01T07:40:54.715+0000 7f0febd79640 -1 osd.5 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 set_numa_affinity not setting numa affinity Feb 1 02:40:54 localhost ceph-osd[32318]: osd.5 16 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial Feb 1 02:40:54 localhost podman[33235]: Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.774896592 +0000 UTC m=+0.068435468 container create 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc.) Feb 1 02:40:54 localhost systemd[1]: Started libpod-conmon-5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab.scope. Feb 1 02:40:54 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.840047189 +0000 UTC m=+0.133586065 container init 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, release=1764794109, ceph=True, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:54 localhost systemd[1]: tmp-crun.FZCXIA.mount: Deactivated successfully. Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.756418039 +0000 UTC m=+0.049956935 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.85933978 +0000 UTC m=+0.152878626 container start 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z) Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.859699278 +0000 UTC m=+0.153238204 container attach 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, 
com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., architecture=x86_64, release=1764794109, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 02:40:54 localhost modest_bassi[33252]: 167 167 Feb 1 02:40:54 localhost systemd[1]: libpod-5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab.scope: Deactivated successfully. Feb 1 02:40:54 localhost podman[33235]: 2026-02-01 07:40:54.863183502 +0000 UTC m=+0.156722378 container died 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1764794109, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Feb 1 02:40:54 localhost podman[33257]: 2026-02-01 07:40:54.961829742 +0000 UTC m=+0.084907299 container remove 5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_bassi, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , 
url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, release=1764794109, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:40:54 localhost systemd[1]: libpod-conmon-5363a64e012e0ca14c34f02c953828773e1341d05894395587ab28312dcd1cab.scope: Deactivated successfully. Feb 1 02:40:55 localhost podman[33278]: Feb 1 02:40:55 localhost podman[33278]: 2026-02-01 07:40:55.159695576 +0000 UTC m=+0.071228508 container create 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, name=rhceph, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1764794109, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Feb 1 02:40:55 localhost systemd[1]: Started libpod-conmon-47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9.scope. Feb 1 02:40:55 localhost systemd[1]: Started libcrun container. 
Feb 1 02:40:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaca4ffa7289cf3a114bd6e609e146c8d484c0d2ed8a9916aa89c6f2f56041/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:55 localhost podman[33278]: 2026-02-01 07:40:55.13077445 +0000 UTC m=+0.042307432 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 02:40:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaca4ffa7289cf3a114bd6e609e146c8d484c0d2ed8a9916aa89c6f2f56041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaca4ffa7289cf3a114bd6e609e146c8d484c0d2ed8a9916aa89c6f2f56041/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 02:40:55 localhost ceph-osd[32318]: osd.5 17 state: booting -> active Feb 1 02:40:55 localhost podman[33278]: 2026-02-01 07:40:55.26137636 +0000 UTC m=+0.172909292 container init 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, name=rhceph, distribution-scope=public) Feb 1 02:40:55 localhost ceph-osd[31357]: osd.2 pg_epoch: 17 pg[1.0( empty local-lis/les=16/17 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [2,4,3] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:40:55 localhost podman[33278]: 2026-02-01 07:40:55.272677561 +0000 UTC m=+0.184210493 container start 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, release=1764794109, CEPH_POINT_RELEASE=, 
build-date=2025-12-08T17:28:53Z, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 02:40:55 localhost podman[33278]: 2026-02-01 07:40:55.272896407 +0000 UTC m=+0.184429379 container attach 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, version=7, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1764794109, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:40:55 localhost systemd[1]: var-lib-containers-storage-overlay-c26321bde21cea8e9d3570bba4458286791b41fcce28dfb1e3f503e3d43c4052-merged.mount: Deactivated successfully. 
Feb 1 02:40:56 localhost ecstatic_hermann[33293]: [ Feb 1 02:40:56 localhost ecstatic_hermann[33293]: { Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "available": false, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "ceph_device": false, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "lsm_data": {}, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "lvs": [], Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "path": "/dev/sr0", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "rejected_reasons": [ Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "Insufficient space (<5GB)", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "Has a FileSystem" Feb 1 02:40:56 localhost ecstatic_hermann[33293]: ], Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "sys_api": { Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "actuators": null, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "device_nodes": "sr0", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "human_readable_size": "482.00 KB", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "id_bus": "ata", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "model": "QEMU DVD-ROM", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "nr_requests": "2", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "partitions": {}, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "path": "/dev/sr0", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "removable": "1", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "rev": "2.5+", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "ro": "0", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "rotational": "1", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "sas_address": "", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "sas_device_handle": "", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "scheduler_mode": "mq-deadline", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "sectors": 0, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "sectorsize": "2048", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "size": 493568.0, Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "support_discard": "0", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "type": "disk", Feb 1 02:40:56 localhost ecstatic_hermann[33293]: "vendor": "QEMU" Feb 1 02:40:56 localhost ecstatic_hermann[33293]: } Feb 1 02:40:56 localhost ecstatic_hermann[33293]: } Feb 1 02:40:56 localhost ecstatic_hermann[33293]: ] Feb 1 02:40:56 localhost systemd[1]: libpod-47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9.scope: Deactivated successfully. 
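[editor note] The bracketed JSON the ecstatic_hermann one-shot container printed above is a device-inventory report in ceph-volume inventory style: /dev/sr0 is listed with "available": false because of "Insufficient space (<5GB)" and "Has a FileSystem". The container writes the report one line at a time, so each journal entry only carries a fragment. A sketch that stitches the fragments back together and summarizes the unusable devices, assuming the export keeps one journal entry per physical line; the container name is whatever podman generated for this particular run, so a real tool would key on something more stable than "ecstatic_hermann".

    import json
    import re

    # "... localhost ecstatic_hermann[33293]: <json fragment>"  (name and PID vary per run)
    FRAGMENT = re.compile(r" ecstatic_hermann\[\d+\]: (.*)$")

    def device_inventory(path):
        """Reassemble the line-by-line JSON report into a Python list."""
        pieces = []
        with open(path, errors="replace") as fh:
            for line in fh:
                match = FRAGMENT.search(line)
                if match:
                    pieces.append(match.group(1))
        return json.loads("".join(pieces))

    def rejected(inventory):
        """Map device path -> rejection reasons for devices reported unavailable."""
        return {d["path"]: d["rejected_reasons"]
                for d in inventory if not d["available"]}

    # With the report above this yields:
    #   {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}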
Feb 1 02:40:56 localhost podman[34766]: 2026-02-01 07:40:56.259422162 +0000 UTC m=+0.055341819 container died 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, GIT_CLEAN=True, version=7) Feb 1 02:40:56 localhost systemd[1]: var-lib-containers-storage-overlay-50eaca4ffa7289cf3a114bd6e609e146c8d484c0d2ed8a9916aa89c6f2f56041-merged.mount: Deactivated successfully. Feb 1 02:40:56 localhost podman[34766]: 2026-02-01 07:40:56.300023677 +0000 UTC m=+0.095943284 container remove 47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_hermann, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1764794109, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux ) Feb 1 02:40:56 localhost systemd[1]: libpod-conmon-47631966979b6d34c4a3bf2abc3abca3a62b8a007e119dfc302fbe42fe8c20a9.scope: Deactivated successfully. 
Feb 1 02:41:05 localhost podman[34899]: 2026-02-01 07:41:05.232097409 +0000 UTC m=+0.096213690 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 02:41:05 localhost podman[34899]: 2026-02-01 07:41:05.361818761 +0000 UTC m=+0.225935062 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, io.openshift.tags=rhceph ceph) Feb 1 02:41:40 localhost systemd[26134]: Starting Mark boot as successful... Feb 1 02:41:40 localhost systemd[26134]: Finished Mark boot as successful. 
Feb 1 02:42:07 localhost podman[35077]: 2026-02-01 07:42:07.034366294 +0000 UTC m=+0.079311349 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public) Feb 1 02:42:07 localhost podman[35077]: 2026-02-01 07:42:07.1348808 +0000 UTC m=+0.179825815 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Feb 1 02:42:18 localhost systemd[1]: session-13.scope: Deactivated successfully. Feb 1 02:42:18 localhost systemd[1]: session-13.scope: Consumed 20.971s CPU time. Feb 1 02:42:18 localhost systemd-logind[761]: Session 13 logged out. Waiting for processes to exit. Feb 1 02:42:18 localhost systemd-logind[761]: Removed session 13. Feb 1 02:44:40 localhost systemd[26134]: Created slice User Background Tasks Slice. Feb 1 02:44:40 localhost systemd[26134]: Starting Cleanup of User's Temporary Files and Directories... Feb 1 02:44:40 localhost systemd[26134]: Finished Cleanup of User's Temporary Files and Directories. 
Feb 1 02:45:43 localhost sshd[35453]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:45:44 localhost systemd-logind[761]: New session 27 of user zuul. Feb 1 02:45:44 localhost systemd[1]: Started Session 27 of User zuul. Feb 1 02:45:44 localhost python3[35501]: ansible-ansible.legacy.ping Invoked with data=pong Feb 1 02:45:45 localhost python3[35546]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 02:45:45 localhost python3[35566]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604215.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 1 02:45:46 localhost python3[35622]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:45:46 localhost python3[35665]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769931946.126945-65903-19266130206660/source _original_basename=tmpr_kl_g4x follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:45:47 localhost python3[35695]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:45:47 localhost python3[35711]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:45:48 localhost python3[35727]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:45:49 localhost python3[35743]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3PGk8eQ/HLnsyRzL8h5NfdCJluWZpaEZ6rXdDgbE0lw7uxHSFELY1SydQI6S9XXHDytudbXCsaTQesdKbdbGdHJj+Vg1gGMFqFoy4uSiBfcCXPrrCkLSus2YLNROASby08xEMRmyMtENrRZxLnhhab6au+uTTgjrzCQYER0PPqsmRyQSw+7T1mpjYdlu7KIQYLe0QTYZWg9qnRz3OQs3ed297w+gXNzQDadWOmWRrqVrG76umhtGZrmJCY+I0xUANvOtiQSFT89RlUBKK2jyA9a/TXr/TBu9+r7PJ/Y4ayoabn3z0m1V8WEY0u5V2/k3yqFndPYU//bBN0nlq90J+EMZPG7yU8fXbmL3KQQG9wWh4grfR0sRjBLd3o2eYVr2minX8gho1p+AosyJZ8aSpq86KLny3WC9JVc4/RqUWVvQ34IbOKg2Ef1+HJDFpRGvPN6pvTfUfBHSYnk3sX22e11wLjEi2Z+2kffa1GY++d6pvqQLop2x0re8+mhNTRaE= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:45:50 localhost python3[35757]: ansible-ping Invoked with data=pong Feb 1 02:46:00 localhost sshd[35759]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:46:01 localhost systemd-logind[761]: New session 28 of user tripleo-admin. Feb 1 02:46:01 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 1 02:46:01 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 1 02:46:01 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 1 02:46:01 localhost systemd[1]: Starting User Manager for UID 1003... Feb 1 02:46:01 localhost systemd[35763]: Queued start job for default target Main User Target. Feb 1 02:46:01 localhost systemd[35763]: Created slice User Application Slice. Feb 1 02:46:01 localhost systemd[35763]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 02:46:01 localhost systemd[35763]: Started Daily Cleanup of User's Temporary Directories. Feb 1 02:46:01 localhost systemd[35763]: Reached target Paths. Feb 1 02:46:01 localhost systemd[35763]: Reached target Timers. Feb 1 02:46:01 localhost systemd[35763]: Starting D-Bus User Message Bus Socket... Feb 1 02:46:01 localhost systemd[35763]: Starting Create User's Volatile Files and Directories... Feb 1 02:46:01 localhost systemd[35763]: Listening on D-Bus User Message Bus Socket. Feb 1 02:46:01 localhost systemd[35763]: Finished Create User's Volatile Files and Directories. Feb 1 02:46:01 localhost systemd[35763]: Reached target Sockets. Feb 1 02:46:01 localhost systemd[35763]: Reached target Basic System. Feb 1 02:46:01 localhost systemd[35763]: Reached target Main User Target. Feb 1 02:46:01 localhost systemd[35763]: Startup finished in 121ms. Feb 1 02:46:01 localhost systemd[1]: Started User Manager for UID 1003. Feb 1 02:46:01 localhost systemd[1]: Started Session 28 of User tripleo-admin. Feb 1 02:46:01 localhost python3[35825]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Feb 1 02:46:06 localhost python3[35845]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Feb 1 02:46:07 localhost python3[35861]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. 
path=None Feb 1 02:46:08 localhost python3[35909]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.8k3rx8dptmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:46:08 localhost python3[35939]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.8k3rx8dptmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:46:09 localhost python3[35955]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.8k3rx8dptmphosts insertbefore=BOF block=172.17.0.106 np0005604212.localdomain np0005604212#012172.18.0.106 np0005604212.storage.localdomain np0005604212.storage#012172.20.0.106 np0005604212.storagemgmt.localdomain np0005604212.storagemgmt#012172.17.0.106 np0005604212.internalapi.localdomain np0005604212.internalapi#012172.19.0.106 np0005604212.tenant.localdomain np0005604212.tenant#012192.168.122.106 np0005604212.ctlplane.localdomain np0005604212.ctlplane#012172.17.0.107 np0005604213.localdomain np0005604213#012172.18.0.107 np0005604213.storage.localdomain np0005604213.storage#012172.20.0.107 np0005604213.storagemgmt.localdomain np0005604213.storagemgmt#012172.17.0.107 np0005604213.internalapi.localdomain np0005604213.internalapi#012172.19.0.107 np0005604213.tenant.localdomain np0005604213.tenant#012192.168.122.107 np0005604213.ctlplane.localdomain np0005604213.ctlplane#012172.17.0.108 np0005604215.localdomain np0005604215#012172.18.0.108 np0005604215.storage.localdomain np0005604215.storage#012172.20.0.108 np0005604215.storagemgmt.localdomain np0005604215.storagemgmt#012172.17.0.108 np0005604215.internalapi.localdomain np0005604215.internalapi#012172.19.0.108 np0005604215.tenant.localdomain np0005604215.tenant#012192.168.122.108 np0005604215.ctlplane.localdomain np0005604215.ctlplane#012172.17.0.103 np0005604209.localdomain np0005604209#012172.18.0.103 np0005604209.storage.localdomain np0005604209.storage#012172.20.0.103 np0005604209.storagemgmt.localdomain np0005604209.storagemgmt#012172.17.0.103 np0005604209.internalapi.localdomain np0005604209.internalapi#012172.19.0.103 np0005604209.tenant.localdomain np0005604209.tenant#012192.168.122.103 np0005604209.ctlplane.localdomain np0005604209.ctlplane#012172.17.0.104 np0005604210.localdomain np0005604210#012172.18.0.104 np0005604210.storage.localdomain np0005604210.storage#012172.20.0.104 np0005604210.storagemgmt.localdomain np0005604210.storagemgmt#012172.17.0.104 np0005604210.internalapi.localdomain np0005604210.internalapi#012172.19.0.104 np0005604210.tenant.localdomain np0005604210.tenant#012192.168.122.104 np0005604210.ctlplane.localdomain np0005604210.ctlplane#012172.17.0.105 np0005604211.localdomain np0005604211#012172.18.0.105 np0005604211.storage.localdomain np0005604211.storage#012172.20.0.105 np0005604211.storagemgmt.localdomain np0005604211.storagemgmt#012172.17.0.105 np0005604211.internalapi.localdomain np0005604211.internalapi#012172.19.0.105 np0005604211.tenant.localdomain np0005604211.tenant#012192.168.122.105 np0005604211.ctlplane.localdomain 
np0005604211.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.154 overcloud.storage.localdomain#012172.20.0.122 overcloud.storagemgmt.localdomain#012172.17.0.228 overcloud.internalapi.localdomain#012172.21.0.164 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:46:10 localhost python3[35971]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.8k3rx8dptmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:46:10 localhost python3[35988]: ansible-file Invoked with path=/tmp/ansible.8k3rx8dptmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:46:11 localhost python3[36004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:46:12 localhost python3[36021]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:46:17 localhost python3[36116]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:46:17 localhost python3[36133]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:47:24 localhost kernel: SELinux: Converting 2698 SID table entries... 
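The /etc/hosts rewrite above follows a copy, edit, replace pattern: /etc/hosts is copied to a temp file, the block between the "# START_HOST_ENTRIES_FOR_STACK: overcloud" and "# END_HOST_ENTRIES_FOR_STACK: overcloud" markers is rewritten, the result is copied back over /etc/hosts, and the temp file is removed. A minimal Python sketch of that marker-block idiom (an illustration of the pattern only, not the blockinfile module itself; the function name is hypothetical):

    def replace_marked_block(text: str, block: str, begin: str, end: str) -> str:
        """Replace the region delimited by marker lines, or prepend it if absent."""
        lines = text.splitlines()
        new_block = [begin, *block.splitlines(), end]
        if begin in lines and end in lines:
            i, j = lines.index(begin), lines.index(end)
            lines[i : j + 1] = new_block
        else:
            lines = new_block + lines  # mirrors insertbefore=BOF in the entry above
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        hosts = "127.0.0.1 localhost\n"
        print(replace_marked_block(
            hosts,
            "192.168.122.108 np0005604215.ctlplane.localdomain np0005604215.ctlplane",
            "# START_HOST_ENTRIES_FOR_STACK: overcloud",
            "# END_HOST_ENTRIES_FOR_STACK: overcloud",
        ))
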
Feb 1 02:47:24 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:47:24 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:47:24 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:47:24 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:47:24 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:47:24 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:47:24 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:47:24 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=6 res=1 Feb 1 02:47:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:47:24 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:47:24 localhost systemd[1]: Reloading. Feb 1 02:47:24 localhost systemd-rc-local-generator[37426]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:47:24 localhost systemd-sysv-generator[37430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:47:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:47:24 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:47:25 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:47:25 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:47:25 localhost systemd[1]: run-rc6c62f276abf46498b39f4472c490dd4.service: Deactivated successfully. Feb 1 02:47:33 localhost python3[37869]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:35 localhost python3[38008]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:47:35 localhost systemd[1]: Reloading. Feb 1 02:47:35 localhost systemd-rc-local-generator[38038]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:47:35 localhost systemd-sysv-generator[38041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:47:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 02:47:36 localhost python3[38062]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:37 localhost python3[38078]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:38 localhost python3[38095]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 02:47:38 localhost python3[38113]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:39 localhost python3[38131]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:39 localhost python3[38149]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:47:39 localhost systemd[1]: Reloading Network Manager... Feb 1 02:47:39 localhost NetworkManager[5972]: [1769932059.7595] audit: op="reload" arg="0" pid=38152 uid=0 result="success" Feb 1 02:47:39 localhost NetworkManager[5972]: [1769932059.7607] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Feb 1 02:47:39 localhost NetworkManager[5972]: [1769932059.7607] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Feb 1 02:47:39 localhost systemd[1]: Reloaded Network Manager. 
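The two ini_file edits above pin dns=none and rc-manager=unmanaged in the [main] section of /etc/NetworkManager/NetworkManager.conf, so NetworkManager stops rewriting /etc/resolv.conf, and the service is then reloaded. A rough Python equivalent for context (unlike the module used above, configparser does not preserve comments or exact spacing, so treat this only as a sketch):

    import configparser
    import subprocess

    def pin_dns_settings(path: str = "/etc/NetworkManager/NetworkManager.conf") -> None:
        cfg = configparser.ConfigParser(interpolation=None)
        cfg.read(path)
        if not cfg.has_section("main"):
            cfg.add_section("main")
        cfg.set("main", "dns", "none")
        cfg.set("main", "rc-manager", "unmanaged")
        with open(path, "w") as fh:
            cfg.write(fh)
        # Same effect as the reload issued above.
        subprocess.run(["systemctl", "reload", "NetworkManager"], check=True)
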
Feb 1 02:47:41 localhost python3[38168]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:41 localhost python3[38185]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:47:42 localhost python3[38203]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:47:42 localhost python3[38219]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:43 localhost python3[38235]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None Feb 1 02:47:43 localhost python3[38251]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:47:44 localhost python3[38267]: ansible-blockinfile Invoked with path=/tmp/ansible.g_mn0wml block=[192.168.122.106]*,[np0005604212.ctlplane.localdomain]*,[172.17.0.106]*,[np0005604212.internalapi.localdomain]*,[172.18.0.106]*,[np0005604212.storage.localdomain]*,[172.20.0.106]*,[np0005604212.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005604212.tenant.localdomain]*,[np0005604212.localdomain]*,[np0005604212]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/MKX//74FswFkw1c1lfM5mahSRoD4B8bhCZSm2/IQ//syuq+Qpi1sEoMv/N1mOrU8atXNtYkVNozl/ypDe2YJkUS8OTt37bT9A7XnBlfFSc5OwXS7VGHpVWbiMbImJibSV7HjoQP0yA8SvCJCcrI3Eh14+cna8tT1rJ9lOFRHvxLfG52XnzFiNUVDU+TG3uRtWEjY5epI8j/U73tEqdP4OAk7ZQ9riN1nllCCIs9FOErOEw14VW+151TbOCzcm9kvzeQMit9jPXTGqmTPKoidZFLhJwEAXq4M9+DFfKQWkVSqfcU3cvPz6S03lUcpPWiJxgGZiIPXxCdRjvI3bKCm898lFYwZq8EfdAwUFMyhmz4GHSyhMwqZWE46cikXf/skoSrEF8ji3NjmyQL7T304iKenZca6rHDI56veO0+PTzZj/pBiaWBWXlqF0WQLAn804z3yapsLNuR8R4EaREmk1Tc2ESg1//73pCUypwEMQWESHsAJ/LCHhyqNHY6Bjc=#012[192.168.122.107]*,[np0005604213.ctlplane.localdomain]*,[172.17.0.107]*,[np0005604213.internalapi.localdomain]*,[172.18.0.107]*,[np0005604213.storage.localdomain]*,[172.20.0.107]*,[np0005604213.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005604213.tenant.localdomain]*,[np0005604213.localdomain]*,[np0005604213]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDhh44DuXnO4hBZJvT1vLnO8ZhT8GKLkBI0M+Q/lXSbHymnCyNerLMqVRhTb5ZUw07lkP6FtBJS95SUtdJuAbUi4jphShtJfBdicoa+uGqI1icHUQCbtCAACtas0lGeGi5q/q1LfzeuKh+LTRj60W+r2OZoChKxeSWYBQ8gIScKe1HgVCJVEESXwNv4CBs6ffOWVYHE+3JDUA3AN3nX931xw4oLMBkwi0q4sNh9Sb0oS79OX+dKdlGfnPLLWKF9QrLrHYdHVkKtPre9d1BdNkl38gRE45uwrAAxXBfeZjbzzfbUlWb54SZwL8P2ej29L5VAbE/97j1HD6+kUZ5wFb6v9oJyFwq8udFDqO1SUMkW4t1VmwD5G4rIU2+u0yHd4H7//fgbf8WAhPv1Qx5tXEqB6LIHqYCz7RekNQO5Xv8ge/gVMzzlxB0DJP6a4DJ8E0/Djnyzw81L2fmyeriPLqt/n/wHscNr1RRI4T1X2iINRwk5QfrxwTEHhJ00FY1kB90=#012[192.168.122.108]*,[np0005604215.ctlplane.localdomain]*,[172.17.0.108]*,[np0005604215.internalapi.localdomain]*,[172.18.0.108]*,[np0005604215.storage.localdomain]*,[172.20.0.108]*,[np0005604215.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005604215.tenant.localdomain]*,[np0005604215.localdomain]*,[np0005604215]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/jKlZ/vxfazmNjpekfENGpQi8TTD6ErYy0BH9P8CRIiiKVdA/53XGSAQlY17b4tT5hzyHsUuXDmbv5R98FSy/Fi8F4KrjgogVPhd/zYoMrffr9ydwv+ih2mIyCPjZC+N92i92gM2OBHBXj5vqyh5yl1t4H1LhFab7P/m42K75mcTytGvGTLKXZbcs/1Ot/APGrs5wqg/c9XFQtgBEn6ttSKQ9caqbgUw88VGRkzaHvzheQvtIjZL0AwigTS24tqFx+bF+liSnSaYk1R8TKe1yMNODv5OCUmFYvPqls4Y3AQkpuroQQXHcQCe0QPuz9nGgPebNOxyTHsK66oDWIUskoYIbrZZhjDxlpdzJ+POEU/jXtGox0/0wlpRK7jNN6r4Fzx6uIzxB5SWn/UJ4BYS853pUsC32TeD0pZXfUAzOGUOzQfvYkUCElyRi8zDN4ubwEWnxvCEPaAFihafbviqQwLNFFmth36owDHV2zU/Q/BtW8vrwfx0cPr2A4WvQvp8=#012[192.168.122.103]*,[np0005604209.ctlplane.localdomain]*,[172.17.0.103]*,[np0005604209.internalapi.localdomain]*,[172.18.0.103]*,[np0005604209.storage.localdomain]*,[172.20.0.103]*,[np0005604209.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005604209.tenant.localdomain]*,[np0005604209.localdomain]*,[np0005604209]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAdXF2/8XBq3bWgr/9swIkzjlkm7PzpC1vdYXglaExGeIUwK5n05/HLobUMrYjOh6yE81+tctBT51wuPLw9qOGf4X3lRx3x0AHUqWSs00OL5nZsMRAd6PknZVyeCWf9jv13mVWIExCYbP8e4VK4M3w1m2xSLFd1aHtGkEUYJKCmacxrxFu2opq+kNCclpMC0BlFeSeX/NZeGwcfVCEyP46JVB9pNDo6D4s98FzzQNtG4DTv8NqE0S8Fj44dajq/80IKXeVEbhVmBikwFGMMEHhsRass2m0Q0rBw1Cv2jqW9hrTO1AWHY2aNDDqr6cKttP27XKfc/unDFFDb0mcc/HRa8JAUYEvuO0FIV6n28+Q5hWoYHAZfMU15U/bQPN1UxbF/MmSIZWvwY+vzCJ+icSJ9qfhDfbd1DttRuV0F3Jdi0jq01TyyPdOz8qT7kKSftD3Awn6BNLlseR8MaOTS+YF4fOnSP/xzj0B+nx/nr5Mrq8+QzKb2YyqdMfWWMGdCw8=#012[192.168.122.104]*,[np0005604210.ctlplane.localdomain]*,[172.17.0.104]*,[np0005604210.internalapi.localdomain]*,[172.18.0.104]*,[np0005604210.storage.localdomain]*,[172.20.0.104]*,[np0005604210.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005604210.tenant.localdomain]*,[np0005604210.localdomain]*,[np0005604210]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeVlqpmEgZX6yoZkE7SzVbEM6MqJe/9qDZPPgFZPb/N85k+uB3cINsoq0pMJYeKjcKY8H56WyuNkVVwVHaouZnJCN4p1rCJmATIDieU8QMDwGucQpbrNRrQWheWQDkmHNIPOxnUDCRgEzDfYiaE4prLHMPKtf8XJAKUKVd6lpZrVSCovGz0UC3U1Le/0N1PJOi4kYEuipVrcfoYHC63A32I+w+7tybU8Rpknhc/UHhdn39PBGuAhbkSf2JEJbLLzLaPkZXT6HOPiBUT9jWKnymCGEcfPjIWOkeelx3fkPoXZCtnYHlSoQSkCVsUmXgHNj7X3+6sJi9+iV/+8jRWQyk6aCC+HjXDhSwxbBUaM9AOimJ9EK7vo8/IK9pQ3gNsEct6rHuvGytACNMWpaT5sRRaVEnS8uz/PL8urB6+59GYGunjAaw8lCQcxw+VNVJaLtj+BpVJZA2EA6XE4fwq7v0s9u0ApIMSyV3DcYzIcDFlT11I5g3RM8vZNipXfnub3U=#012[192.168.122.105]*,[np0005604211.ctlplane.localdomain]*,[172.17.0.105]*,[np0005604211.internalapi.localdomain]*,[172.18.0.105]*,[np0005604211.storage.localdomain]*,[172.20.0.105]*,[np0005604211.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005604211.tenant.localdomain]*,[np0005604211.localdomain]*,[np0005604211]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ5JUOdiESLpaYomijw3u9LxHN4VxpmenW9EczyVvVdofuEESAIR1Q8BIVkW7gxgVyrzHxOpbaoAS+aZaKazruu7/chC8MkDw1lvfeyQwMZax6UziUan2wIFVTaCc7kITOHrdWkJm+OIvCs/ImtkSgsTmvTiQedvs86ME3gHNyA+7taoDXnH6UCB6d5ex6PzwXsKI03iUVWFfsGP3ZU7r52IBwgrLG+VplbaPBRNNP/RvKULVsokG3UCMd3pjHv3VYBdXPYTFOPf666ZEuxEz+Frz43oXzEhr4W61RN70cAFJDDFoOmBDxXzZqrmF7r1vSV3ojl+aHaVLCGL4Wnjrp9wl5Zq8XCGN/7ttzaZKrjj/flccfBEiYL9odgqp92EjmxsRqG4bFq/nEzS/DTJ88QQVpGQNC2T6bElJVdBIrpZAyv7n5HlwNQwfsltQtzbqe1E32azZb1wq13ajV9Ii7QrVd81nGYFM79NqiVVbXs5NypsJOMQ6ZoqyHK5+yyHk=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:44 localhost python3[38283]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.g_mn0wml' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:45 localhost python3[38301]: ansible-file Invoked with path=/tmp/ansible.g_mn0wml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:47:46 localhost python3[38317]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 02:47:46 localhost python3[38333]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:46 localhost python3[38351]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:47 localhost python3[38370]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Feb 1 02:47:49 localhost python3[38507]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:50 localhost python3[38524]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None 
download_dir=None list=None releasever=None Feb 1 02:47:53 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:47:53 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:47:53 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:47:53 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:47:53 localhost systemd[1]: Reloading. Feb 1 02:47:53 localhost systemd-sysv-generator[38595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:47:53 localhost systemd-rc-local-generator[38588]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:47:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:47:53 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:47:53 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 1 02:47:54 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 1 02:47:54 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 1 02:47:54 localhost systemd[1]: tuned.service: Consumed 1.455s CPU time. Feb 1 02:47:54 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 1 02:47:54 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:47:54 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:47:54 localhost systemd[1]: run-r63042f02508a4418809bd1306bcf4753.service: Deactivated successfully. Feb 1 02:47:55 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 1 02:47:55 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:47:55 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:47:55 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:47:55 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:47:55 localhost systemd[1]: run-r1a023bd0904c40ffaf91e2c965e2a52d.service: Deactivated successfully. Feb 1 02:47:56 localhost python3[38961]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:47:56 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 1 02:47:56 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 1 02:47:56 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 1 02:47:56 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 1 02:47:58 localhost systemd[1]: Started Dynamic System Tuning Daemon. 
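tuned and tuned-profiles-cpu-partitioning are installed above and the daemon is restarted; the entries that follow read /etc/tuned/active_profile and then apply the throughput-performance profile with tuned-adm. A small check-then-apply sketch of that step (assuming tuned-adm is on PATH; the helper name is hypothetical):

    import subprocess
    from pathlib import Path

    ACTIVE = Path("/etc/tuned/active_profile")

    def ensure_profile(profile: str = "throughput-performance") -> None:
        # Switch profiles only when the currently active one differs.
        current = ACTIVE.read_text().strip() if ACTIVE.exists() else ""
        if current != profile:
            subprocess.run(["tuned-adm", "profile", profile], check=True)

    if __name__ == "__main__":
        ensure_profile()
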
Feb 1 02:47:58 localhost python3[39156]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:47:59 localhost python3[39173]: ansible-slurp Invoked with src=/etc/tuned/active_profile Feb 1 02:47:59 localhost python3[39189]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:48:00 localhost python3[39205]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:01 localhost python3[39225]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:02 localhost python3[39242]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:48:04 localhost python3[39258]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:10 localhost python3[39274]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:11 localhost python3[39322]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:11 localhost python3[39367]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932090.879111-70662-167416958266394/source _original_basename=tmpu1d0fu25 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:12 localhost python3[39397]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:12 localhost python3[39445]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 
02:48:13 localhost python3[39488]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932092.4219818-70756-76883123223862/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=d0cfa4bd89bcc42c9513572d4ad38f679529236d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:13 localhost python3[39550]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:14 localhost python3[39593]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932093.3264253-70815-4068009467879/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=1f1a1c0de88e28e1c405f8e299af3f6bf8624260 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:14 localhost python3[39655]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:14 localhost python3[39698]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932094.2129455-70815-100366947357955/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=a14c85776d2e39c2e9398053dff459a83e663446 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:15 localhost python3[39760]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:15 localhost python3[39803]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932095.1389735-70815-89402326267497/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:16 localhost python3[39894]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:16 localhost python3[39953]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932096.1468594-70815-269833929399320/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:17 localhost python3[40032]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:17 localhost python3[40075]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932096.9315221-70815-7637395490528/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=4b0728b1a4158e6417d66a1cc37f4e4d26059385 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:18 localhost python3[40152]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:18 localhost python3[40195]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932097.7887776-70815-280030628976059/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:18 localhost python3[40257]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:19 localhost python3[40300]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932098.6073034-70815-63759684348808/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=b186a8b61d7f8cda474e1db6d9f709185a517ec4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:19 localhost python3[40362]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:20 localhost python3[40405]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932099.4436457-70815-195874346355848/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:20 localhost python3[40467]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:20 
localhost python3[40510]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932100.3453958-70815-50343333910865/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:21 localhost python3[40572]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:21 localhost python3[40615]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932101.1587975-70815-155741913345682/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=3d5ed7edeabd971026d9e415515c8db40416d5cd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:22 localhost python3[40645]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:48:23 localhost python3[40693]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:48:23 localhost python3[40736]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932103.079721-71631-132883807427961/source _original_basename=tmpaj41dn9z follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:48:28 localhost python3[40766]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 02:48:28 localhost python3[40827]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:33 localhost python3[40844]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:38 localhost python3[40861]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:39 localhost python3[40884]: 
ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:40 localhost systemd[35763]: Starting Mark boot as successful... Feb 1 02:48:40 localhost systemd[35763]: Finished Mark boot as successful. Feb 1 02:48:43 localhost python3[40902]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:44 localhost python3[40925]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:48 localhost python3[40942]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:53 localhost python3[40959]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:53 localhost python3[40982]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:48:58 localhost python3[40999]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:02 localhost python3[41016]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:03 localhost python3[41039]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:07 localhost python3[41056]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:12 localhost python3[41073]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None 
stdin=None Feb 1 02:49:12 localhost python3[41096]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:17 localhost python3[41113]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:22 localhost python3[41208]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:22 localhost python3[41256]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:22 localhost python3[41274]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpjdco962d recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:23 localhost python3[41304]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:24 localhost python3[41352]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:24 localhost python3[41370]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:25 localhost python3[41432]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:25 localhost python3[41450]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Feb 1 02:49:25 localhost python3[41512]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:26 localhost python3[41530]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:26 localhost python3[41592]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:26 localhost python3[41610]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:27 localhost python3[41672]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:27 localhost python3[41690]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:28 localhost python3[41752]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:28 localhost python3[41770]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:28 localhost python3[41832]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:28 localhost python3[41850]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None 
src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:29 localhost python3[41912]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:29 localhost python3[41930]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:30 localhost python3[41992]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:30 localhost python3[42010]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:30 localhost python3[42072]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:31 localhost python3[42090]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:31 localhost python3[42152]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:31 localhost python3[42170]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:32 localhost python3[42200]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:49:33 localhost python3[42248]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:33 localhost python3[42266]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpnk58qvu3 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:36 localhost python3[42296]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:49:41 localhost python3[42313]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:49:41 localhost python3[42331]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:49:42 localhost python3[42349]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:49:42 localhost systemd[1]: Reloading. Feb 1 02:49:42 localhost systemd-sysv-generator[42379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:49:42 localhost systemd-rc-local-generator[42376]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:49:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:49:42 localhost systemd[1]: Starting Netfilter Tables... Feb 1 02:49:42 localhost systemd[1]: Finished Netfilter Tables. 
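The ansible-ansible.legacy.command entries logged between 02:48:38 and 02:49:17 implement a simple per-network probe: resolve the interface that routes to a peer with "ip ro get", read that interface's MTU from sysfs, then confirm reachability with plain pings and with 1472-byte payloads (a full 1500-byte frame once ICMP and IP headers are added). The sketch below reproduces that logic in Python; it assumes iproute2 and ping are on PATH, the helper names are illustrative, and the target addresses are only examples copied from the log entries above.

import re
import subprocess

def outgoing_interface(ip):
    # Equivalent of: ip ro get <ip> | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p"
    route = subprocess.run(["ip", "route", "get", ip],
                           capture_output=True, text=True, check=True).stdout
    match = re.search(r"\bdev (\S+)", route.splitlines()[0])
    return match.group(1) if match else None

def interface_mtu(dev):
    # Equivalent of: cat /sys/class/net/$INT/mtu 2>/dev/null || echo "0"
    try:
        with open(f"/sys/class/net/{dev}/mtu") as handle:
            return int(handle.read().strip())
    except OSError:
        return 0

def ping(ip, payload=None):
    # ping -w 10 -c 5 [-s <payload>] <ip>, mirroring the logged commands
    cmd = ["ping", "-w", "10", "-c", "5"]
    if payload is not None:
        cmd += ["-s", str(payload)]
    return subprocess.run(cmd + [ip]).returncode == 0

if __name__ == "__main__":
    for target in ["172.18.0.106", "172.20.0.106"]:  # example peers from the log
        dev = outgoing_interface(target)
        mtu = interface_mtu(dev) if dev else 0
        print(target, dev, mtu, ping(target), ping(target, payload=1472))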
Feb 1 02:49:43 localhost python3[42439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:43 localhost python3[42482]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932182.985839-74376-100395968078890/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:44 localhost python3[42512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:44 localhost python3[42530]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:45 localhost python3[42579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:45 localhost python3[42622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932184.8879824-74487-153447734538570/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:46 localhost python3[42684]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:46 localhost python3[42727]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932185.8449843-74545-210916059093489/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:47 localhost python3[42789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:47 localhost python3[42832]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932186.8493836-74611-201200344791116/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:48 localhost python3[42894]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:48 localhost python3[42937]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932187.7543635-74857-56068773496544/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:49 localhost python3[42999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:49:49 localhost python3[43042]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932188.643439-74907-38474285717260/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:50 localhost python3[43072]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:50 localhost python3[43137]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:49:51 localhost python3[43154]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:52 localhost python3[43171]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:49:52 localhost python3[43190]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:52 localhost python3[43206]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:53 localhost python3[43222]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:53 localhost python3[43238]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 1 02:49:54 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=7 res=1 Feb 1 02:49:55 localhost python3[43258]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 1 02:49:55 localhost kernel: SELinux: Converting 2702 SID table entries... Feb 1 02:49:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:49:55 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:49:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:49:55 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:49:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:49:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:49:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:49:55 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=8 res=1 Feb 1 02:49:56 localhost python3[43279]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 1 02:49:56 localhost kernel: SELinux: Converting 2702 SID table entries... Feb 1 02:49:56 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:49:56 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:49:56 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:49:56 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:49:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:49:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:49:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:49:57 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=9 res=1 Feb 1 02:49:57 localhost python3[43300]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? 
ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 1 02:49:58 localhost kernel: SELinux: Converting 2702 SID table entries... Feb 1 02:49:58 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:49:58 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:49:58 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:49:58 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:49:58 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:49:58 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:49:58 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:49:58 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=10 res=1 Feb 1 02:49:58 localhost python3[43321]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:58 localhost python3[43337]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:59 localhost python3[43353]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:49:59 localhost python3[43369]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:50:00 localhost python3[43385]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:00 localhost python3[43402]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:50:04 localhost python3[43419]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:04 localhost python3[43467]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:05 localhost python3[43510]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932204.5064027-75690-235995805173519/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:05 localhost python3[43540]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:50:05 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 1 02:50:05 localhost systemd[1]: Stopped Load Kernel Modules. Feb 1 02:50:05 localhost systemd[1]: Stopping Load Kernel Modules... Feb 1 02:50:05 localhost systemd[1]: Starting Load Kernel Modules... Feb 1 02:50:05 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 1 02:50:05 localhost kernel: Bridge firewalling registered Feb 1 02:50:05 localhost systemd-modules-load[43543]: Inserted module 'br_netfilter' Feb 1 02:50:05 localhost systemd-modules-load[43543]: Module 'msr' is built in Feb 1 02:50:05 localhost systemd[1]: Finished Load Kernel Modules. 
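The nft handling logged a little earlier, at 02:49:50-02:49:52, follows a validate-then-apply pattern: the five tripleo-*.nft fragments are concatenated and dry-run with "nft -c -f -", the chains file is then loaded on its own, and finally the flushes, rules and jump updates are fed to nft in a single transaction so the chains are never left half-populated. Below is a minimal Python sketch of that sequence; the fragment paths and ordering are taken from the logged commands, the wrapper functions are illustrative, and it has to run as root against existing fragment files.

import subprocess

FRAGMENTS = [
    "/etc/nftables/tripleo-chains.nft",
    "/etc/nftables/tripleo-flushes.nft",
    "/etc/nftables/tripleo-rules.nft",
    "/etc/nftables/tripleo-update-jumps.nft",
    "/etc/nftables/tripleo-jumps.nft",
]

def concatenate(paths):
    # Same effect as the logged `cat <fragments> | nft ... -f -`
    parts = []
    for path in paths:
        with open(path) as fragment:
            parts.append(fragment.read())
    return "".join(parts)

def nft(args, stdin=None):
    subprocess.run(["nft"] + args, input=stdin, text=True, check=True)

if __name__ == "__main__":
    nft(["-c", "-f", "-"], stdin=concatenate(FRAGMENTS))  # dry-run the full ruleset
    nft(["-f", FRAGMENTS[0]])                             # create the TripleO chains
    nft(["-f", "-"], stdin=concatenate(FRAGMENTS[1:4]))   # flushes + rules + jump updates in one pass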
Feb 1 02:50:06 localhost python3[43594]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:06 localhost python3[43637]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932205.9383452-75738-184500077012120/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:07 localhost python3[43667]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:07 localhost python3[43684]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:07 localhost python3[43702]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:08 localhost python3[43720]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:08 localhost python3[43737]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:08 localhost python3[43754]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:09 localhost python3[43771]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:09 localhost python3[43789]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:09 localhost python3[43807]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:10 localhost python3[43825]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:10 localhost python3[43843]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:10 localhost python3[43861]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:10 localhost python3[43879]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present 
sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:11 localhost python3[43897]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:11 localhost python3[43914]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:11 localhost python3[43931]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:12 localhost python3[43948]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:12 localhost python3[43965]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Feb 1 02:50:12 localhost python3[43983]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 02:50:12 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 1 02:50:12 localhost systemd[1]: Stopped Apply Kernel Variables. Feb 1 02:50:12 localhost systemd[1]: Stopping Apply Kernel Variables... Feb 1 02:50:12 localhost systemd[1]: Starting Apply Kernel Variables... Feb 1 02:50:12 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 1 02:50:12 localhost systemd[1]: Finished Apply Kernel Variables. 
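The ansible-sysctl entries above persist each key to /etc/sysctl.d/99-tripleo.conf without an immediate reload (reload=False), and the values take effect when systemd-sysctl.service is restarted at 02:50:12. The sketch below reaches the same end state; the key/value pairs are copied from the logged invocations, writing /proc/sys directly stands in for the sysctl_set=True behaviour, and the net.bridge.* keys only exist once br_netfilter is loaded, as recorded at 02:50:05.

# Sketch: persist and apply the TripleO sysctl settings recorded in the log.
SETTINGS = {
    "fs.aio-max-nr": "1048576",
    "fs.inotify.max_user_instances": "1024",
    "kernel.pid_max": "1048576",
    "net.bridge.bridge-nf-call-arptables": "1",
    "net.bridge.bridge-nf-call-ip6tables": "1",
    "net.bridge.bridge-nf-call-iptables": "1",
    "net.ipv4.conf.all.rp_filter": "1",
    "net.ipv4.ip_forward": "1",
    "net.ipv4.ip_local_reserved_ports": "35357,49000-49001",
    "net.ipv4.ip_nonlocal_bind": "1",
    "net.ipv4.neigh.default.gc_thresh1": "1024",
    "net.ipv4.neigh.default.gc_thresh2": "2048",
    "net.ipv4.neigh.default.gc_thresh3": "4096",
    "net.ipv6.conf.all.disable_ipv6": "0",
    "net.ipv6.conf.all.forwarding": "0",
    "net.ipv6.conf.default.disable_ipv6": "0",
    "net.ipv6.conf.lo.disable_ipv6": "0",
    "net.ipv6.ip_nonlocal_bind": "1",
}

def persist(path="/etc/sysctl.d/99-tripleo.conf"):
    # One key=value line per setting, as sysctl.d expects.
    with open(path, "w") as conf:
        for key, value in sorted(SETTINGS.items()):
            conf.write(f"{key}={value}\n")

def apply_now():
    # Write /proc/sys directly; requires root and (for net.bridge.*) br_netfilter.
    for key, value in SETTINGS.items():
        with open("/proc/sys/" + key.replace(".", "/"), "w") as node:
            node.write(value)

if __name__ == "__main__":
    persist()
    apply_now()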
Feb 1 02:50:13 localhost python3[44003]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:13 localhost python3[44019]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:13 localhost python3[44035]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:14 localhost python3[44051]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:50:14 localhost python3[44067]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:14 localhost python3[44083]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:15 localhost python3[44099]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:15 localhost python3[44115]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:15 localhost python3[44131]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:16 localhost python3[44179]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:16 localhost python3[44222]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932215.8917336-76107-114380676750876/source _original_basename=tmpuhz2h57s follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:16 localhost python3[44252]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:18 localhost python3[44269]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:18 localhost python3[44317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:18 localhost python3[44360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932218.2639725-76212-243758819772904/source _original_basename=tmpyftewfz3 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:19 localhost python3[44390]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:19 localhost python3[44406]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:20 localhost python3[44422]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:20 localhost python3[44438]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:20 localhost python3[44454]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:20 localhost python3[44498]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:21 localhost python3[44524]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:21 localhost python3[44566]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:21 localhost python3[44598]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:22 localhost python3[44646]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False Feb 1 02:50:23 localhost python3[44683]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604215.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None 
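The ansible-group/ansible-user pair just above pins the qemu account to GID/UID 107 with a nologin shell, so files created from the libvirt containers keep stable ownership on the host. A minimal idempotent sketch of the same check-and-create step follows, using groupadd/useradd directly in place of the Ansible modules; the uid, gid, shell and comment are copied from the logged parameters, and it must run as root.

import grp
import pwd
import subprocess

def ensure_group(name, gid):
    # Create the group only if it does not already exist.
    try:
        grp.getgrnam(name)
    except KeyError:
        subprocess.run(["groupadd", "-g", str(gid), name], check=True)

def ensure_user(name, uid, group, shell="/sbin/nologin", comment="qemu user"):
    # Create the user only if it does not already exist.
    try:
        pwd.getpwnam(name)
    except KeyError:
        subprocess.run(["useradd", "-u", str(uid), "-g", group,
                        "-s", shell, "-c", comment, name], check=True)

if __name__ == "__main__":
    ensure_group("qemu", 107)
    ensure_user("qemu", 107, "qemu")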
Feb 1 02:50:23 localhost python3[44707]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Feb 1 02:50:23 localhost python3[44723]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:24 localhost python3[44772]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:24 localhost python3[44815]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932223.9916744-76716-5648114303049/source _original_basename=tmpjcbnvtmg follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:25 localhost python3[44845]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Feb 1 02:50:25 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=11 res=1 Feb 1 02:50:26 localhost python3[44865]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:26 localhost python3[44881]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:26 localhost python3[44897]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Feb 1 02:50:28 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=12 res=1 Feb 1 02:50:28 localhost python3[44917]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:50:31 localhost python3[44934]: ansible-setup Invoked with gather_subset=['!all', '!min', 
'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 02:50:31 localhost python3[44995]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:32 localhost python3[45011]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:32 localhost python3[45072]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:33 localhost python3[45115]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932232.4804363-77041-35579151803985/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=4a4ec5a7bbea6767597329319374590966ea2f65 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:33 localhost python3[45177]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:34 localhost python3[45222]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932233.515498-77132-212304177395739/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:34 localhost python3[45252]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:34 localhost python3[45268]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:35 localhost python3[45284]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:35 localhost python3[45300]: 
ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:36 localhost python3[45348]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:36 localhost python3[45391]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932235.9042277-77250-181579216637399/source _original_basename=tmposo12t2p follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:36 localhost python3[45421]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:37 localhost python3[45437]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:38 localhost python3[45453]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:50:41 localhost python3[45502]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:41 localhost python3[45547]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932241.1113806-77545-204720109673888/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:42 localhost python3[45578]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system 
no_block=False force=None masked=None Feb 1 02:50:42 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 1 02:50:42 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 1 02:50:42 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 1 02:50:42 localhost systemd[1]: sshd.service: Consumed 2.936s CPU time, read 1.9M from disk, written 72.0K to disk. Feb 1 02:50:42 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 1 02:50:42 localhost systemd[1]: Stopping sshd-keygen.target... Feb 1 02:50:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 02:50:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 02:50:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 02:50:42 localhost systemd[1]: Reached target sshd-keygen.target. Feb 1 02:50:42 localhost systemd[1]: Starting OpenSSH server daemon... Feb 1 02:50:42 localhost sshd[45582]: main: sshd: ssh-rsa algorithm is disabled Feb 1 02:50:42 localhost systemd[1]: Started OpenSSH server daemon. Feb 1 02:50:42 localhost python3[45598]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:43 localhost python3[45616]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:44 localhost python3[45634]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 02:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3396 writes, 16K keys, 3396 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3396 writes, 201 syncs, 16.90 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3396 writes, 16K keys, 3396 commit groups, 1.0 writes per commit group, ingest: 15.30 MB, 0.03 MB/s#012Interval WAL: 3396 writes, 201 syncs, 16.90 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) 
W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 
0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Feb 1 02:50:47 localhost python3[45683]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:47 localhost python3[45701]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 1 02:50:48 localhost python3[45731]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 02:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 14.62 MB, 0.02 MB/s#012Interval WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) 
Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 
0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Feb 1 02:50:50 localhost python3[45781]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:50 localhost python3[45799]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:51 localhost python3[45829]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:50:51 localhost systemd[1]: Reloading. Feb 1 02:50:51 localhost systemd-rc-local-generator[45853]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:50:51 localhost systemd-sysv-generator[45858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:50:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:50:51 localhost systemd[1]: Starting chronyd online sources service... Feb 1 02:50:51 localhost chronyc[45870]: 200 OK Feb 1 02:50:51 localhost systemd[1]: chrony-online.service: Deactivated successfully. Feb 1 02:50:51 localhost systemd[1]: Finished chronyd online sources service. 
Feb 1 02:50:52 localhost python3[45887]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:52 localhost chronyd[25933]: System clock was stepped by 0.000039 seconds Feb 1 02:50:52 localhost python3[45904]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:53 localhost python3[45921]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:53 localhost chronyd[25933]: System clock was stepped by 0.000000 seconds Feb 1 02:50:53 localhost python3[45938]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:54 localhost python3[46067]: ansible-timezone Invoked with name=UTC hwclock=None Feb 1 02:50:54 localhost systemd[1]: Starting Time & Date Service... Feb 1 02:50:54 localhost systemd[1]: Started Time & Date Service. Feb 1 02:50:55 localhost python3[46150]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:55 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
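Illustrative note (not part of the captured log): the two task pairs above run "chronyc makestep" followed by "chronyc waitsync 30", and chronyd confirms each step in its own log line. The following is a minimal Python sketch of that same step-and-wait check run outside Ansible; it assumes chronyd is already running and chronyc is on PATH, and everything other than the two chronyc commands taken from the log is an assumption.

#!/usr/bin/env python3
# Illustrative sketch only -- reproduces the "chronyc makestep" / "chronyc waitsync 30"
# sequence logged above, assuming a running chronyd and chronyc on PATH.
import subprocess
import sys

def run(cmd):
    # Mirrors ansible.legacy.command with _uses_shell=False: argv list, no shell.
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"$ {' '.join(cmd)}\n{result.stdout.strip()}")
    return result.returncode

# Step the clock immediately instead of slewing (chronyd logs this as
# "System clock was stepped by ... seconds").
if run(["chronyc", "makestep"]) != 0:
    sys.exit("chronyc makestep failed")

# Wait for synchronisation, trying at most 30 times; a nonzero exit code
# means chronyd did not report sync in time.
if run(["chronyc", "waitsync", "30"]) != 0:
    sys.exit("chronyd did not synchronise within 30 tries")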
Feb 1 02:50:55 localhost python3[46167]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:50:56 localhost python3[46185]: ansible-slurp Invoked with src=/etc/tuned/active_profile Feb 1 02:50:56 localhost python3[46201]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:50:57 localhost python3[46217]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:57 localhost python3[46233]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:50:57 localhost python3[46281]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:58 localhost python3[46324]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932257.6321523-78542-211405516631821/source _original_basename=tmp6318g3kl follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:58 localhost python3[46386]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:50:59 localhost python3[46429]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932258.5061057-78600-136063128894773/source _original_basename=tmplligh94w follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:50:59 localhost python3[46459]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 02:50:59 localhost systemd[1]: Reloading. Feb 1 02:50:59 localhost systemd-rc-local-generator[46490]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:50:59 localhost systemd-sysv-generator[46493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:50:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:51:00 localhost python3[46512]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:51:00 localhost python3[46528]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:51:01 localhost python3[46545]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:51:01 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. Feb 1 02:51:01 localhost python3[46562]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:51:01 localhost python3[46578]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:51:02 localhost python3[46626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:51:02 localhost python3[46669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932261.8845768-78805-31797608341399/source _original_basename=tmpnns37bp3 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:51:24 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
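Illustrative note (not part of the captured log): the "ip netns add ns_temp" / "ip netns delete ns_temp" commands above are a quick probe that network namespaces work before the neutron directories are laid down (systemd then reports the transient run-netns-ns_temp.mount going away). A minimal Python sketch of the same probe follows; the namespace name ns_temp is taken from the log, root privileges and iproute2 are assumed, and the rest is illustrative.

#!/usr/bin/env python3
# Illustrative sketch only -- the netns add/delete probe seen in the log,
# assuming root privileges and the iproute2 "ip" tool.
import subprocess

def netns_supported(name="ns_temp"):
    try:
        subprocess.run(["ip", "netns", "add", name], check=True)
    except (OSError, subprocess.CalledProcessError):
        return False
    # Always remove the probe namespace again, as the playbook does.
    subprocess.run(["ip", "netns", "delete", name], check=False)
    return True

if __name__ == "__main__":
    print("network namespaces usable:", netns_supported())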
Feb 1 02:51:28 localhost python3[46780]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:51:28 localhost python3[46796]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Feb 1 02:51:28 localhost python3[46812]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:51:29 localhost python3[46828]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:51:29 localhost python3[46844]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:51:29 localhost python3[46860]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 1 02:51:30 localhost kernel: SELinux: Converting 2705 SID table entries... 
Feb 1 02:51:30 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:51:30 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:51:30 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:51:30 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:51:30 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:51:30 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:51:30 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:51:31 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=13 res=1 Feb 1 02:51:31 localhost python3[46881]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:51:33 localhost python3[47018]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Feb 1 02:51:33 localhost rsyslogd[760]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Feb 1 02:51:33 localhost python3[47034]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:51:34 localhost python3[47050]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:51:34 localhost python3[47066]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile 
/var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, 
'/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n -iNONE', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Feb 1 02:51:39 localhost python3[47114]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:51:39 localhost python3[47157]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932299.1210291-80419-20899947941990/source _original_basename=tmpxinz7x4m follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:51:40 localhost python3[47187]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:51:40 localhost systemd[35763]: Created slice User Background Tasks Slice. Feb 1 02:51:40 localhost systemd[35763]: Starting Cleanup of User's Temporary Files and Directories... Feb 1 02:51:40 localhost systemd[35763]: Finished Cleanup of User's Temporary Files and Directories. 
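Illustrative note (not part of the captured log): the ansible-tripleo_container_configs dump above shows the shape of each /var/lib/kolla/config_files/*.json file: a "command" to exec, a "config_files" list of source/dest copy rules, and optional "permissions" entries. The sketch below writes one such file using the iscsid values copied from the log; the output-path handling and the standalone-script framing are assumptions, not the deployment tooling itself.

#!/usr/bin/env python3
# Illustrative sketch only -- writes a kolla config file with the same shape
# as the iscsid entry logged above. Values are taken from the log; writing to
# /var/lib/kolla normally requires root.
import json
from pathlib import Path

CONFIG = {
    "command": "/usr/sbin/iscsid -f",
    "config_files": [
        {
            "dest": "/etc/iscsi/",
            "merge": True,
            "preserve_properties": True,
            "source": "/var/lib/kolla/config_files/src-iscsid/",
        }
    ],
}

def write_kolla_config(path="/var/lib/kolla/config_files/iscsid.json"):
    # The container start-up logs above bind-mount this file into the
    # container as /var/lib/kolla/config_files/config.json.
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(json.dumps(CONFIG, indent=2) + "\n")

if __name__ == "__main__":
    write_kolla_config()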
Feb 1 02:51:42 localhost python3[47311]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:51:43 localhost python3[47432]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 02:51:46 localhost python3[47448]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:51:47 localhost python3[47465]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 02:51:51 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:51:51 localhost dbus-broker-launch[14398]: Noticed file-system modification, trigger reload. Feb 1 02:51:51 localhost dbus-broker-launch[14398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 1 02:51:51 localhost dbus-broker-launch[14398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 1 02:51:51 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:51:51 localhost systemd[1]: Reexecuting. Feb 1 02:51:51 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 1 02:51:51 localhost systemd[1]: Detected virtualization kvm. Feb 1 02:51:51 localhost systemd[1]: Detected architecture x86-64. Feb 1 02:51:51 localhost systemd-rc-local-generator[47519]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:51:51 localhost systemd-sysv-generator[47522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:51:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:51:59 localhost kernel: SELinux: Converting 2705 SID table entries... 
Feb 1 02:51:59 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 02:51:59 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 02:51:59 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 02:51:59 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 02:51:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 02:51:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 02:51:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 02:51:59 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:51:59 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=14 res=1 Feb 1 02:51:59 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Feb 1 02:52:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:52:01 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 02:52:01 localhost systemd[1]: Reloading. Feb 1 02:52:01 localhost systemd-rc-local-generator[47613]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:52:01 localhost systemd-sysv-generator[47620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:52:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:52:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 02:52:01 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:52:01 localhost systemd-journald[619]: Journal stopped Feb 1 02:52:01 localhost systemd[1]: Stopping Journal Service... Feb 1 02:52:01 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd). Feb 1 02:52:01 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Feb 1 02:52:01 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Feb 1 02:52:01 localhost systemd[1]: Stopped Journal Service. Feb 1 02:52:01 localhost systemd[1]: systemd-journald.service: Consumed 1.714s CPU time. Feb 1 02:52:01 localhost systemd[1]: Starting Journal Service... Feb 1 02:52:01 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 1 02:52:01 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Feb 1 02:52:01 localhost systemd[1]: systemd-udevd.service: Consumed 2.848s CPU time. Feb 1 02:52:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Feb 1 02:52:01 localhost systemd-journald[47940]: Journal started Feb 1 02:52:01 localhost systemd-journald[47940]: Runtime Journal (/run/log/journal/00836dadc27b01f9fb0a211cca69e688) is 12.2M, max 314.7M, 302.5M free. Feb 1 02:52:01 localhost systemd[1]: Started Journal Service. Feb 1 02:52:01 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Feb 1 02:52:01 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. 
Feb 1 02:52:01 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:52:01 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 02:52:01 localhost systemd-udevd[47944]: Using default interface naming scheme 'rhel-9.0'. Feb 1 02:52:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Feb 1 02:52:01 localhost systemd[1]: Reloading. Feb 1 02:52:01 localhost systemd-rc-local-generator[48530]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:52:01 localhost systemd-sysv-generator[48534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:52:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:52:01 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 02:52:02 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 02:52:02 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 02:52:02 localhost systemd[1]: man-db-cache-update.service: Consumed 1.336s CPU time. Feb 1 02:52:02 localhost systemd[1]: run-r35a3ea8843aa4d688f7a7b87ffa4fd4f.service: Deactivated successfully. Feb 1 02:52:02 localhost systemd[1]: run-rcddb9faf836a4d5baf50b47767ff6a4d.service: Deactivated successfully. Feb 1 02:52:03 localhost python3[48955]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Feb 1 02:52:04 localhost python3[48974]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 02:52:05 localhost python3[48992]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:05 localhost python3[48992]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Feb 1 02:52:05 localhost python3[48992]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Feb 1 02:52:12 localhost podman[49004]: 2026-02-01 07:52:05.215048027 +0000 UTC m=+0.043943973 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:52:12 localhost python3[48992]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 591bb9fb46a70e9f840f28502388406078442df6b6701a3c17990ee75e333673 --format json Feb 1 02:52:12 
localhost python3[49105]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:12 localhost python3[49105]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Feb 1 02:52:12 localhost python3[49105]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Feb 1 02:52:20 localhost podman[49118]: 2026-02-01 07:52:12.795696836 +0000 UTC m=+0.042923001 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 1 02:52:20 localhost python3[49105]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d59b33e7fb841c47a47a12b18fb68b11debd968b4596c63f3177ecc7400fb1bc --format json Feb 1 02:52:20 localhost python3[49220]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:20 localhost python3[49220]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Feb 1 02:52:20 localhost python3[49220]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Feb 1 02:52:29 localhost podman[49544]: 2026-02-01 07:52:29.020878774 +0000 UTC m=+0.062107022 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 02:52:29 localhost podman[49544]: 2026-02-01 07:52:29.141548033 +0000 UTC m=+0.182776271 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, com.redhat.component=rhceph-container) Feb 1 02:52:36 localhost podman[49234]: 2026-02-01 07:52:20.592689317 +0000 UTC m=+0.041650582 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 02:52:36 localhost python3[49220]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6eddd23e1e6adfbfa713a747123707c02f92ffdbf1913da92f171aba1d6d7856 --format json Feb 1 02:52:36 localhost python3[49934]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:36 localhost python3[49934]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Feb 1 02:52:36 localhost python3[49934]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Feb 1 02:52:50 localhost podman[49955]: 2026-02-01 07:52:36.908977849 +0000 UTC m=+0.046318627 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 02:52:50 localhost python3[49934]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 2c8610235afe953aa46efb141a5a988799548b22280d65a7e7ab21889422df37 --format json Feb 1 02:52:51 localhost python3[50291]: ansible-containers.podman.podman_image Invoked with force=True 
name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:51 localhost python3[50291]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json Feb 1 02:52:51 localhost python3[50291]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false Feb 1 02:52:57 localhost podman[50304]: 2026-02-01 07:52:51.523833728 +0000 UTC m=+0.045427520 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 1 02:52:57 localhost python3[50291]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ab5aab6d0c3ec80926032b7acf4cec1d4710f1c2daccd17ae4daa64399ec237 --format json Feb 1 02:52:58 localhost python3[50395]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:52:58 localhost python3[50395]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json Feb 1 02:52:58 localhost python3[50395]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false Feb 1 02:53:02 localhost podman[50408]: 2026-02-01 07:52:58.163170149 +0000 UTC m=+0.030859274 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 1 02:53:02 localhost python3[50395]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 4853142d85dba3766b28d28ae195b26f7242230fe3646e9590a7aee2dc2e0dfa --format json Feb 1 02:53:02 localhost python3[50487]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:53:02 localhost python3[50487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json Feb 1 02:53:02 localhost python3[50487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false Feb 1 
02:53:05 localhost podman[50500]: 2026-02-01 07:53:03.032631179 +0000 UTC m=+0.044520952 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 1 02:53:05 localhost python3[50487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ac6ea63c0fb4851145e847f9ced2f20804afc8472907b63a82d5866f5cf608a --format json Feb 1 02:53:05 localhost python3[50578]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:53:05 localhost python3[50578]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json Feb 1 02:53:05 localhost python3[50578]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false Feb 1 02:53:07 localhost podman[50591]: 2026-02-01 07:53:05.654741421 +0000 UTC m=+0.037856324 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 1 02:53:07 localhost python3[50578]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ba1a08ea1c1207b471b1f02cee16ff456b8a812662cce16906d16de330a66d63 --format json Feb 1 02:53:08 localhost python3[50668]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:53:08 localhost python3[50668]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json Feb 1 02:53:08 localhost python3[50668]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false Feb 1 02:53:10 localhost podman[50681]: 2026-02-01 07:53:08.248677973 +0000 UTC m=+0.042281472 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 1 02:53:10 localhost python3[50668]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 8576d3a17e57ea28f29435f132f583320941b5aa7bf0aa02e998b09a094d1fe8 --format json Feb 1 02:53:11 localhost python3[50760]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} 
path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:53:11 localhost python3[50760]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json Feb 1 02:53:11 localhost python3[50760]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false Feb 1 02:53:14 localhost podman[50772]: 2026-02-01 07:53:11.141874692 +0000 UTC m=+0.042903580 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 02:53:14 localhost python3[50760]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7fcbf63c0504494c8fcaa07583f909a06486472a0982aeac9554c6fdbeb04c9a --format json Feb 1 02:53:15 localhost python3[50864]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 1 02:53:15 localhost python3[50864]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json Feb 1 02:53:15 localhost python3[50864]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false Feb 1 02:53:17 localhost podman[50876]: 2026-02-01 07:53:15.15680301 +0000 UTC m=+0.043021645 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 1 02:53:17 localhost python3[50864]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 72ddf109f135b64d3116af7b84caaa358dc72e2e60f4c8753fa54fa65b76ba35 --format json Feb 1 02:53:17 localhost python3[50955]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:53:19 localhost ansible-async_wrapper.py[51128]: Invoked with 353297964674 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932398.9788837-83068-65418368192919/AnsiballZ_command.py _ Feb 1 02:53:19 localhost ansible-async_wrapper.py[51131]: Starting module and watcher Feb 1 02:53:19 localhost ansible-async_wrapper.py[51131]: Start watching 51132 (3600) Feb 1 02:53:19 localhost ansible-async_wrapper.py[51132]: Start module (51132) Feb 1 02:53:19 localhost ansible-async_wrapper.py[51128]: Return async_wrapper task started. Feb 1 02:53:20 localhost python3[51152]: ansible-ansible.legacy.async_status Invoked with jid=353297964674.51128 mode=status _async_dir=/tmp/.ansible_async Feb 1 02:53:23 localhost puppet-user[51136]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 1 02:53:23 localhost puppet-user[51136]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:23 localhost puppet-user[51136]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:23 localhost puppet-user[51136]: (file & line not available) Feb 1 02:53:23 localhost puppet-user[51136]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:23 localhost puppet-user[51136]: (file & line not available) Feb 1 02:53:23 localhost puppet-user[51136]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 1 02:53:23 localhost puppet-user[51136]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 1 02:53:23 localhost puppet-user[51136]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.12 seconds Feb 1 02:53:23 localhost puppet-user[51136]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully Feb 1 02:53:23 localhost puppet-user[51136]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created Feb 1 02:53:23 localhost puppet-user[51136]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully Feb 1 02:53:23 localhost puppet-user[51136]: Notice: Applied catalog in 0.08 seconds Feb 1 02:53:23 localhost puppet-user[51136]: Application: Feb 1 02:53:23 localhost puppet-user[51136]: Initial environment: production Feb 1 02:53:23 localhost puppet-user[51136]: Converged environment: production Feb 1 02:53:23 localhost puppet-user[51136]: Run mode: user Feb 1 02:53:23 localhost puppet-user[51136]: Changes: Feb 1 02:53:23 localhost puppet-user[51136]: Total: 3 Feb 1 02:53:23 localhost puppet-user[51136]: Events: Feb 1 02:53:23 localhost puppet-user[51136]: Success: 3 Feb 1 02:53:23 localhost puppet-user[51136]: Total: 3 Feb 1 02:53:23 localhost puppet-user[51136]: Resources: Feb 1 02:53:23 localhost puppet-user[51136]: Changed: 3 Feb 1 02:53:23 localhost puppet-user[51136]: Out of sync: 3 Feb 1 02:53:23 localhost puppet-user[51136]: Total: 10 Feb 1 02:53:23 localhost puppet-user[51136]: Time: Feb 1 02:53:23 localhost puppet-user[51136]: Schedule: 0.00 Feb 1 02:53:23 localhost puppet-user[51136]: File: 0.00 Feb 1 02:53:23 localhost puppet-user[51136]: Exec: 0.01 Feb 1 02:53:23 localhost puppet-user[51136]: Augeas: 0.05 Feb 1 02:53:23 localhost puppet-user[51136]: Transaction evaluation: 0.08 Feb 1 02:53:23 localhost puppet-user[51136]: Catalog application: 0.08 Feb 1 02:53:23 localhost puppet-user[51136]: Config retrieval: 0.16 Feb 1 02:53:23 localhost puppet-user[51136]: Last run: 1769932403 Feb 1 02:53:23 localhost puppet-user[51136]: Filebucket: 0.00 Feb 1 02:53:23 localhost puppet-user[51136]: Total: 0.08 Feb 1 02:53:23 localhost puppet-user[51136]: Version: Feb 1 02:53:23 localhost puppet-user[51136]: Config: 1769932403 Feb 1 02:53:23 localhost puppet-user[51136]: Puppet: 7.10.0 Feb 1 02:53:23 localhost ansible-async_wrapper.py[51132]: Module complete (51132) Feb 1 02:53:24 localhost ansible-async_wrapper.py[51131]: Done in kid B. 
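The ansible-containers.podman.podman_image entries above record the exact CLI sequence the module drives for each image: `podman image ls <name> --format json`, `podman pull <name> -q --tls-verify=false` (because validate_certs=False was passed), then `podman inspect <id> --format json`. A minimal Python sketch of that sequence for one image taken from the log, assuming podman is on PATH and the registry is reachable; it is a reading aid for the PODMAN-IMAGE-DEBUG lines, not the module's implementation.

```python
#!/usr/bin/env python3
"""Reading aid only: reproduce the ls -> pull -> inspect sequence shown in the
PODMAN-IMAGE-DEBUG lines above. The image name comes from the log; the auth,
build and idempotency logic of the real module is omitted."""
import json
import subprocess

IMAGE = "registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1"

def podman(*args: str) -> str:
    return subprocess.run(["podman", *args], check=True,
                          capture_output=True, text=True).stdout

# 1. Check whether the image is already present locally.
present = json.loads(podman("image", "ls", IMAGE, "--format", "json") or "[]")

# 2. Pull it quietly; --tls-verify=false mirrors validate_certs=False in the log.
image_id = podman("pull", IMAGE, "-q", "--tls-verify=false").strip()

# 3. Inspect the pulled image by ID, as the module does before reporting back.
inspected = json.loads(podman("inspect", image_id, "--format", "json"))

print(f"was present: {bool(present)}; pulled: {image_id[:12]}; "
      f"created: {inspected[0].get('Created')}")
```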
Feb 1 02:53:30 localhost python3[51279]: ansible-ansible.legacy.async_status Invoked with jid=353297964674.51128 mode=status _async_dir=/tmp/.ansible_async Feb 1 02:53:30 localhost python3[51295]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:53:31 localhost python3[51311]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:53:31 localhost python3[51359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:53:32 localhost python3[51402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932411.471384-83431-118320226345827/source _original_basename=tmpzp4p8yt5 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 02:53:32 localhost python3[51432]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:53:33 localhost python3[51535]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 1 02:53:34 localhost python3[51554]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 02:53:34 localhost python3[51570]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005604215 step=1 update_config_hash_only=False Feb 1 02:53:35 localhost python3[51586]: ansible-file Invoked with 
path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:53:36 localhost python3[51602]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 02:53:36 localhost python3[51618]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Feb 1 02:53:38 localhost python3[51659]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Feb 1 02:53:38 localhost podman[51834]: 2026-02-01 07:53:38.363021799 +0000 UTC m=+0.109011335 container create 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_id=tripleo_puppet_step1, url=https://www.redhat.com, release=1766032510, architecture=x86_64, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=container-puppet-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 02:53:38 localhost podman[51834]: 2026-02-01 07:53:38.284575661 +0000 UTC m=+0.030565207 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 1 02:53:38 localhost podman[51852]: 2026-02-01 07:53:38.396858937 +0000 UTC m=+0.115232489 container create 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, release=1766032510, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, container_name=container-puppet-metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 1 02:53:38 localhost systemd[1]: Started 
libpod-conmon-416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2.scope. Feb 1 02:53:38 localhost podman[51836]: 2026-02-01 07:53:38.303060709 +0000 UTC m=+0.041560106 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 02:53:38 localhost systemd[1]: Started libcrun container. Feb 1 02:53:38 localhost podman[51836]: 2026-02-01 07:53:38.419954665 +0000 UTC m=+0.158454062 container create a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=1766032510, build-date=2026-01-12T23:31:49Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 02:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193d63b6dd9579507d9f1518ccbcb97a99c18e05e53fbccdc25e375b68ff02d6/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:38 localhost podman[51864]: 2026-02-01 07:53:38.426360395 +0000 UTC m=+0.137324001 container create e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron) Feb 1 02:53:38 localhost systemd[1]: Started libpod-conmon-11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27.scope. 
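Each container create event above embeds the config_data that was handed to podman: entrypoint, environment (STEP, PUPPET_TAGS, STEP_CONFIG, ...), host networking and the read-only bind mounts. A sketch of how such a structure maps onto a `podman create` command line, using a trimmed copy of the container-puppet-crond data from the log; the values shown come from the event above, but the translation is a simplified illustration, not the tripleo_ansible code.

```python
#!/usr/bin/env python3
"""Illustration only: map a (trimmed) config_data structure, as recorded for
container-puppet-crond above, onto a `podman create` command line."""
import shlex

config_data = {
    "security_opt": ["label=disable"],
    "user": 0,
    "entrypoint": "/var/lib/container-puppet/container-puppet.sh",
    "environment": {"STEP": 6, "NET_HOST": "true", "NAME": "crond"},  # trimmed
    "net": ["host"],
    "image": "registry.redhat.io/rhosp-rhel9/openstack-cron:17.1",
    "volumes": [  # trimmed to two of the mounts listed in the log
        "/etc/puppet:/tmp/puppet-etc:ro",
        "/var/lib/config-data:/var/lib/config-data:rw",
    ],
}

cmd = ["podman", "create", "--name", "container-puppet-crond",
       "--user", str(config_data["user"]),
       "--entrypoint", config_data["entrypoint"]]
for opt in config_data["security_opt"]:
    cmd += ["--security-opt", opt]
for net in config_data["net"]:
    cmd += ["--net", net]
for key, value in config_data["environment"].items():
    cmd += ["--env", f"{key}={value}"]
for volume in config_data["volumes"]:
    cmd += ["--volume", volume]
cmd.append(config_data["image"])

print(shlex.join(cmd))  # prints the command line instead of running it
```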
Feb 1 02:53:38 localhost systemd[1]: Started libcrun container. Feb 1 02:53:38 localhost podman[51834]: 2026-02-01 07:53:38.436818487 +0000 UTC m=+0.182808013 container init 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, container_name=container-puppet-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13) Feb 1 02:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fdf082ac490d270b07fd0f17cf89cd8bf1d13e0d604cb75e37ccaf54fab194/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:38 localhost podman[51855]: 2026-02-01 07:53:38.442923037 +0000 UTC m=+0.152778065 container create 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, release=1766032510) Feb 1 02:53:38 localhost systemd[1]: Started libpod-conmon-a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb.scope. 
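Every container created in this step carries the label config_id=tripleo_puppet_step1, visible in the create events above, which makes the whole batch easy to query once the conmon scopes have started. A short sketch, assuming local podman access; the field names follow `podman ps --format json`.

```python
#!/usr/bin/env python3
"""Sketch: list the containers of this step by the config_id label recorded in
the create events above."""
import json
import subprocess

result = subprocess.run(
    ["podman", "ps", "-a",
     "--filter", "label=config_id=tripleo_puppet_step1",
     "--format", "json"],
    check=True, capture_output=True, text=True)

for ctr in json.loads(result.stdout or "[]"):
    print(ctr["Names"][0], ctr["State"], ctr["Image"])
```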
Feb 1 02:53:38 localhost podman[51834]: 2026-02-01 07:53:38.447983602 +0000 UTC m=+0.193973158 container start 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, architecture=x86_64, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_puppet_step1) Feb 1 02:53:38 localhost podman[51834]: 2026-02-01 07:53:38.448157008 +0000 UTC m=+0.194146564 container attach 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
name=rhosp-rhel9/openstack-collectd, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:53:38 localhost systemd[1]: Started libcrun container. 
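The tripleo_container_manage invocation earlier in this run passed log_base_path=/var/log/containers/stdouts, and that directory was created just before the container-puppet containers above were started. A small sketch for skimming whatever lands there; only the directory comes from the log, and the one-file-per-container layout assumed here is not shown in it.

```python
#!/usr/bin/env python3
"""Sketch: print the tail of each file under the log_base_path passed to
tripleo_container_manage above. The per-file naming is an assumption."""
from pathlib import Path

LOG_DIR = Path("/var/log/containers/stdouts")

for logfile in sorted(LOG_DIR.glob("*")):
    if not logfile.is_file():
        continue
    lines = logfile.read_text(errors="replace").splitlines()
    print(f"== {logfile.name} ({len(lines)} lines) ==")
    for line in lines[-5:]:  # last few lines of each container's output
        print("   " + line)
```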
Feb 1 02:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/892d1779a7f946097f73616f672cd69c2781ff491e090964134e591e5adb1a86/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:38 localhost podman[51864]: 2026-02-01 07:53:38.358506743 +0000 UTC m=+0.069470389 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 1 02:53:38 localhost podman[51852]: 2026-02-01 07:53:38.358223484 +0000 UTC m=+0.076597056 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:53:38 localhost podman[51855]: 2026-02-01 07:53:38.36158826 +0000 UTC m=+0.071443368 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 1 02:53:38 localhost podman[51836]: 2026-02-01 07:53:38.462453902 +0000 UTC m=+0.200953299 container init a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=container-puppet-nova_libvirt, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 02:53:38 localhost podman[51836]: 2026-02-01 07:53:38.479487979 +0000 UTC m=+0.217987366 container start a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., container_name=container-puppet-nova_libvirt, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 02:53:38 localhost podman[51836]: 2026-02-01 07:53:38.479709047 +0000 UTC m=+0.218208444 container attach a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, container_name=container-puppet-nova_libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z) Feb 1 02:53:39 localhost systemd[1]: Started libpod-conmon-e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219.scope. Feb 1 02:53:39 localhost systemd[1]: Started libpod-conmon-1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9.scope. Feb 1 02:53:39 localhost systemd[1]: Started libcrun container. Feb 1 02:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd9ea2ebfbeb4119560e74e5b0456fd618118c9f72a7ecf288a55a3e1a95413/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:39 localhost systemd[1]: Started libcrun container. Feb 1 02:53:39 localhost podman[51852]: 2026-02-01 07:53:39.571388383 +0000 UTC m=+1.289761925 container init 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, container_name=container-puppet-metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 02:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311e95bb3bdbe1bb40730cbc80ffa3861fcafc1265b18b49a2e8169fc5d3cbf1/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311e95bb3bdbe1bb40730cbc80ffa3861fcafc1265b18b49a2e8169fc5d3cbf1/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:39 localhost podman[51864]: 2026-02-01 07:53:39.579367259 +0000 UTC m=+1.290330905 container init e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_puppet_step1, container_name=container-puppet-crond, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git) Feb 1 02:53:39 localhost podman[51855]: 2026-02-01 07:53:39.584400162 +0000 UTC m=+1.294255210 container init 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=container-puppet-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 02:53:39 localhost podman[51864]: 2026-02-01 07:53:39.589973134 +0000 UTC m=+1.300936790 container start 
e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, container_name=container-puppet-crond, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 02:53:39 localhost podman[51864]: 2026-02-01 07:53:39.590382408 +0000 UTC m=+1.301346054 container attach e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:53:39 localhost podman[51855]: 2026-02-01 07:53:39.5953464 +0000 UTC m=+1.305201458 container start 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 
'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 02:53:39 localhost podman[51855]: 2026-02-01 07:53:39.595576428 +0000 UTC m=+1.305431486 container attach 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, 
config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 02:53:39 localhost podman[51852]: 2026-02-01 07:53:39.632722161 +0000 UTC m=+1.351095693 container start 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_puppet_step1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:53:39 localhost podman[51852]: 
2026-02-01 07:53:39.633304141 +0000 UTC m=+1.351677703 container attach 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, container_name=container-puppet-metrics_qdr, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_puppet_step1) Feb 1 02:53:40 localhost podman[51732]: 2026-02-01 07:53:38.207676956 +0000 UTC m=+0.055603690 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 1 02:53:40 localhost podman[52110]: 2026-02-01 07:53:40.626726464 +0000 UTC m=+0.085993529 container create 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, release=1766032510, vcs-type=git, build-date=2026-01-12T23:07:24Z, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T23:07:24Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=container-puppet-ceilometer, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-central, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public) Feb 1 02:53:40 localhost systemd[1]: Started libpod-conmon-5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91.scope. Feb 1 02:53:40 localhost podman[52110]: 2026-02-01 07:53:40.57302674 +0000 UTC m=+0.032293815 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 1 02:53:40 localhost systemd[1]: Started libcrun container. 
Feb 1 02:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c1697d3f9451811eabeba845d2774ca9523a4c1f6255791f262d42dbea547b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:40 localhost podman[52110]: 2026-02-01 07:53:40.704052184 +0000 UTC m=+0.163319219 container init 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, build-date=2026-01-12T23:07:24Z, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:24Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=container-puppet-ceilometer, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git) Feb 1 02:53:40 localhost podman[52110]: 2026-02-01 07:53:40.716583946 +0000 UTC m=+0.175851031 container start 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, 
name=container-puppet-ceilometer, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_puppet_step1, build-date=2026-01-12T23:07:24Z, com.redhat.component=openstack-ceilometer-central-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:24Z, name=rhosp-rhel9/openstack-ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, url=https://www.redhat.com, version=17.1.13) Feb 1 02:53:40 localhost podman[52110]: 2026-02-01 07:53:40.720066797 +0000 UTC m=+0.179333832 container attach 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack 
Platform 17.1 ceilometer-central, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, org.opencontainers.image.created=2026-01-12T23:07:24Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-central, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:07:24Z, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-central-container) Feb 1 02:53:41 localhost ovs-vsctl[52167]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 1 02:53:41 localhost puppet-user[52005]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:41 localhost puppet-user[52005]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:41 localhost puppet-user[52005]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52005]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:41 localhost puppet-user[52005]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52007]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 1 02:53:41 localhost puppet-user[52007]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:41 localhost puppet-user[52007]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:41 localhost puppet-user[52007]: (file & line not available) Feb 1 02:53:41 localhost systemd[1]: tmp-crun.l8tHPv.mount: Deactivated successfully. Feb 1 02:53:41 localhost puppet-user[52007]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:41 localhost puppet-user[52007]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52053]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:41 localhost puppet-user[52053]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:41 localhost puppet-user[52053]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:41 localhost puppet-user[52053]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52053]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:41 localhost puppet-user[52053]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52058]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:41 localhost puppet-user[52058]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:41 localhost puppet-user[52058]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:41 localhost puppet-user[52058]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52051]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:41 localhost puppet-user[52051]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:41 localhost puppet-user[52051]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:41 localhost puppet-user[52051]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52053]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.10 seconds Feb 1 02:53:41 localhost puppet-user[52058]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:41 localhost puppet-user[52058]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52058]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.08 seconds Feb 1 02:53:41 localhost puppet-user[52051]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:41 localhost puppet-user[52051]: (file & line not available) Feb 1 02:53:41 localhost puppet-user[52053]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Feb 1 02:53:41 localhost puppet-user[52005]: in a future release. Use nova::cinder::os_region_name instead Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Feb 1 02:53:41 localhost puppet-user[52005]: in a future release. 
Use nova::cinder::catalog_info instead Feb 1 02:53:41 localhost puppet-user[52053]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Feb 1 02:53:41 localhost puppet-user[52058]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Feb 1 02:53:41 localhost puppet-user[52053]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Feb 1 02:53:41 localhost puppet-user[52058]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Feb 1 02:53:41 localhost puppet-user[52051]: Notice: Accepting previously invalid value for target type 'Integer' Feb 1 02:53:41 localhost puppet-user[52058]: Notice: Applied catalog in 0.04 seconds Feb 1 02:53:41 localhost puppet-user[52058]: Application: Feb 1 02:53:41 localhost puppet-user[52058]: Initial environment: production Feb 1 02:53:41 localhost puppet-user[52058]: Converged environment: production Feb 1 02:53:41 localhost puppet-user[52058]: Run mode: user Feb 1 02:53:41 localhost puppet-user[52058]: Changes: Feb 1 02:53:41 localhost puppet-user[52058]: Total: 2 Feb 1 02:53:41 localhost puppet-user[52058]: Events: Feb 1 02:53:41 localhost puppet-user[52058]: Success: 2 Feb 1 02:53:41 localhost puppet-user[52058]: Total: 2 Feb 1 02:53:41 localhost puppet-user[52058]: Resources: Feb 1 02:53:41 localhost puppet-user[52058]: Changed: 2 Feb 1 02:53:41 localhost puppet-user[52058]: Out of sync: 2 Feb 1 02:53:41 localhost puppet-user[52058]: Skipped: 7 Feb 1 02:53:41 localhost puppet-user[52058]: Total: 9 Feb 1 02:53:41 localhost puppet-user[52058]: Time: Feb 1 02:53:41 localhost puppet-user[52058]: File: 0.01 Feb 1 02:53:41 localhost puppet-user[52058]: Cron: 0.01 Feb 1 02:53:41 localhost puppet-user[52058]: Transaction evaluation: 0.04 Feb 1 02:53:41 localhost puppet-user[52058]: Catalog application: 0.04 Feb 1 02:53:41 localhost puppet-user[52058]: Config retrieval: 0.11 Feb 1 02:53:41 localhost puppet-user[52058]: Last run: 1769932421 Feb 1 02:53:41 localhost puppet-user[52058]: Total: 0.04 Feb 1 02:53:41 localhost puppet-user[52058]: Version: Feb 1 02:53:41 localhost puppet-user[52058]: Config: 1769932421 Feb 1 02:53:41 localhost puppet-user[52058]: Puppet: 7.10.0 Feb 1 02:53:41 localhost puppet-user[52051]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.17 seconds Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. 
(file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}2bddea0b0fe879e5c10f974fc2b3f9b8f40891a25c67e254d0f4c621edb22ff7' Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Feb 1 02:53:41 localhost puppet-user[52051]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Feb 1 02:53:41 localhost puppet-user[52051]: Notice: Applied catalog in 0.03 seconds Feb 1 02:53:41 localhost puppet-user[52051]: Application: Feb 1 02:53:41 localhost puppet-user[52051]: Initial environment: production Feb 1 02:53:41 localhost puppet-user[52051]: Converged environment: production Feb 1 02:53:41 localhost puppet-user[52051]: Run mode: user Feb 1 02:53:41 localhost puppet-user[52051]: Changes: Feb 1 02:53:41 localhost puppet-user[52051]: Total: 7 Feb 1 02:53:41 localhost puppet-user[52051]: Events: Feb 1 02:53:41 localhost puppet-user[52051]: Success: 7 Feb 1 02:53:41 localhost puppet-user[52051]: Total: 7 Feb 1 02:53:41 localhost puppet-user[52051]: Resources: Feb 1 02:53:41 localhost puppet-user[52051]: Skipped: 13 Feb 1 02:53:41 localhost puppet-user[52051]: Changed: 5 Feb 1 02:53:41 localhost puppet-user[52051]: Out of sync: 5 Feb 1 02:53:41 localhost puppet-user[52051]: Total: 20 Feb 1 02:53:41 localhost puppet-user[52051]: Time: Feb 1 02:53:41 localhost puppet-user[52051]: File: 0.01 Feb 1 02:53:41 localhost puppet-user[52051]: Transaction evaluation: 0.03 Feb 1 02:53:41 localhost puppet-user[52051]: Catalog application: 0.03 Feb 1 02:53:41 localhost puppet-user[52051]: Config retrieval: 0.22 Feb 1 02:53:41 localhost puppet-user[52051]: Last run: 1769932421 Feb 1 02:53:41 localhost puppet-user[52051]: Total: 0.03 Feb 1 02:53:41 localhost puppet-user[52051]: Version: Feb 1 02:53:41 localhost puppet-user[52051]: Config: 1769932421 Feb 1 02:53:41 localhost puppet-user[52051]: Puppet: 7.10.0 Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Feb 1 02:53:41 localhost puppet-user[52007]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.42 seconds Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Feb 1 02:53:41 localhost systemd[1]: libpod-e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219.scope: Deactivated successfully. Feb 1 02:53:41 localhost systemd[1]: libpod-e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219.scope: Consumed 2.095s CPU time. Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Feb 1 02:53:41 localhost puppet-user[52005]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Feb 1 02:53:41 localhost systemd[1]: libpod-11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27.scope: Deactivated successfully. Feb 1 02:53:41 localhost systemd[1]: libpod-11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27.scope: Consumed 2.230s CPU time. Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Feb 1 02:53:41 localhost podman[52522]: 2026-02-01 07:53:41.980965194 +0000 UTC m=+0.061843926 container died e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, container_name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git) Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595' Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Feb 1 02:53:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:41 localhost systemd[1]: var-lib-containers-storage-overlay-4fd9ea2ebfbeb4119560e74e5b0456fd618118c9f72a7ecf288a55a3e1a95413-merged.mount: Deactivated successfully. 
Feb 1 02:53:41 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Feb 1 02:53:42 localhost puppet-user[52053]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Feb 1 02:53:42 localhost puppet-user[52053]: Notice: Applied catalog in 0.52 seconds Feb 1 02:53:42 localhost puppet-user[52053]: Application: Feb 1 02:53:42 localhost puppet-user[52053]: Initial environment: production Feb 1 02:53:42 localhost puppet-user[52053]: Converged environment: production Feb 1 02:53:42 localhost puppet-user[52053]: Run mode: user Feb 1 02:53:42 localhost puppet-user[52053]: Changes: Feb 1 02:53:42 localhost puppet-user[52053]: Total: 4 Feb 1 02:53:42 localhost puppet-user[52053]: Events: Feb 1 02:53:42 localhost puppet-user[52053]: Success: 4 Feb 1 02:53:42 localhost puppet-user[52053]: Total: 4 Feb 1 02:53:42 localhost puppet-user[52053]: Resources: Feb 1 02:53:42 localhost puppet-user[52053]: Changed: 4 Feb 1 02:53:42 localhost puppet-user[52053]: Out of sync: 4 Feb 1 02:53:42 localhost puppet-user[52053]: Skipped: 8 Feb 1 02:53:42 localhost puppet-user[52053]: Total: 13 Feb 1 02:53:42 localhost puppet-user[52053]: Time: Feb 1 02:53:42 localhost puppet-user[52053]: File: 0.00 Feb 1 02:53:42 localhost puppet-user[52053]: Exec: 0.05 Feb 1 02:53:42 localhost puppet-user[52053]: Config retrieval: 0.13 Feb 1 02:53:42 localhost puppet-user[52053]: Augeas: 0.45 Feb 1 02:53:42 localhost puppet-user[52053]: Transaction evaluation: 0.51 Feb 1 02:53:42 localhost puppet-user[52053]: Catalog application: 0.52 Feb 1 02:53:42 localhost puppet-user[52053]: Last run: 1769932422 Feb 1 02:53:42 localhost puppet-user[52053]: Total: 0.52 Feb 1 02:53:42 localhost puppet-user[52053]: Version: Feb 1 02:53:42 localhost puppet-user[52053]: Config: 1769932421 Feb 1 02:53:42 localhost puppet-user[52053]: Puppet: 7.10.0 Feb 1 02:53:42 localhost podman[51852]: 2026-02-01 07:53:42.025569894 +0000 UTC m=+3.743943516 container died 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=container-puppet-metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Feb 1 02:53:42 localhost podman[52522]: 2026-02-01 07:53:42.034983278 +0000 UTC m=+0.115861910 container cleanup e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_puppet_step1, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 
'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 1 02:53:42 localhost systemd[1]: libpod-conmon-e1a6e99a8ee469635366e7a4cea70fd0827e6234b5619dd2b41de273d6a79219.scope: Deactivated successfully. Feb 1 02:53:42 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Feb 1 02:53:42 localhost podman[52543]: 2026-02-01 07:53:42.098264053 +0000 UTC m=+0.111529481 container cleanup 11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13) Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Feb 1 02:53:42 localhost puppet-user[52007]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Feb 1 02:53:42 localhost systemd[1]: libpod-conmon-11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27.scope: Deactivated successfully. 
Feb 1 02:53:42 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:53:42 localhost puppet-user[52007]: Notice: Applied catalog in 0.31 seconds Feb 1 02:53:42 localhost puppet-user[52007]: Application: Feb 1 02:53:42 localhost puppet-user[52007]: Initial environment: production Feb 1 02:53:42 localhost puppet-user[52007]: Converged environment: production Feb 1 02:53:42 localhost puppet-user[52007]: Run mode: user Feb 1 02:53:42 localhost puppet-user[52007]: Changes: Feb 1 02:53:42 localhost puppet-user[52007]: Total: 43 Feb 1 02:53:42 localhost puppet-user[52007]: Events: Feb 1 
02:53:42 localhost puppet-user[52007]: Success: 43 Feb 1 02:53:42 localhost puppet-user[52007]: Total: 43 Feb 1 02:53:42 localhost puppet-user[52007]: Resources: Feb 1 02:53:42 localhost puppet-user[52007]: Skipped: 14 Feb 1 02:53:42 localhost puppet-user[52007]: Changed: 38 Feb 1 02:53:42 localhost puppet-user[52007]: Out of sync: 38 Feb 1 02:53:42 localhost puppet-user[52007]: Total: 82 Feb 1 02:53:42 localhost puppet-user[52007]: Time: Feb 1 02:53:42 localhost puppet-user[52007]: Concat fragment: 0.00 Feb 1 02:53:42 localhost puppet-user[52007]: Concat file: 0.00 Feb 1 02:53:42 localhost puppet-user[52007]: File: 0.14 Feb 1 02:53:42 localhost puppet-user[52007]: Transaction evaluation: 0.30 Feb 1 02:53:42 localhost puppet-user[52007]: Catalog application: 0.31 Feb 1 02:53:42 localhost puppet-user[52007]: Config retrieval: 0.49 Feb 1 02:53:42 localhost puppet-user[52007]: Last run: 1769932422 Feb 1 02:53:42 localhost puppet-user[52007]: Total: 0.31 Feb 1 02:53:42 localhost puppet-user[52007]: Version: Feb 1 02:53:42 localhost puppet-user[52007]: Config: 1769932421 Feb 1 02:53:42 localhost puppet-user[52007]: Puppet: 7.10.0 Feb 1 02:53:42 localhost systemd[1]: libpod-1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9.scope: Deactivated successfully. Feb 1 02:53:42 localhost systemd[1]: libpod-1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9.scope: Consumed 2.490s CPU time. Feb 1 02:53:42 localhost podman[51855]: 2026-02-01 07:53:42.299597393 +0000 UTC m=+4.009452441 container died 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=container-puppet-iscsid, 
konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 1 02:53:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:42 localhost systemd[1]: var-lib-containers-storage-overlay-45fdf082ac490d270b07fd0f17cf89cd8bf1d13e0d604cb75e37ccaf54fab194-merged.mount: Deactivated successfully. Feb 1 02:53:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11651b540d5228a8dc09a1dd29082efa8101b78da1dee1533bd26f2650e9cd27-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:42 localhost systemd[1]: var-lib-containers-storage-overlay-311e95bb3bdbe1bb40730cbc80ffa3861fcafc1265b18b49a2e8169fc5d3cbf1-merged.mount: Deactivated successfully. Feb 1 02:53:42 localhost podman[52689]: 2026-02-01 07:53:42.402692022 +0000 UTC m=+0.091157938 container cleanup 1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid) Feb 1 02:53:42 localhost systemd[1]: libpod-conmon-1f16e0e1e2e3a2dd4b5e420bfd21338a28394069efeefa12bd8c15ddab6dbcb9.scope: Deactivated successfully. Feb 1 02:53:42 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 1 02:53:42 localhost podman[52722]: 2026-02-01 07:53:42.426260166 +0000 UTC m=+0.062725907 container create 734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 02:53:42 localhost systemd[1]: Started 
libpod-conmon-734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035.scope. Feb 1 02:53:42 localhost systemd[1]: Started libcrun container. Feb 1 02:53:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1783ac4e59af83bfa6c705cb913a4e3f5e5d835b34fd8ada82ce7a661d9e5a58/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:42 localhost systemd[1]: libpod-416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2.scope: Deactivated successfully. Feb 1 02:53:42 localhost systemd[1]: libpod-416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2.scope: Consumed 2.744s CPU time. Feb 1 02:53:42 localhost podman[52722]: 2026-02-01 07:53:42.397516313 +0000 UTC m=+0.033982064 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 1 02:53:42 localhost puppet-user[52005]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 1.24 seconds Feb 1 02:53:42 localhost podman[52755]: 2026-02-01 07:53:42.469895091 +0000 UTC m=+0.046051370 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 02:53:42 localhost podman[52722]: 2026-02-01 07:53:42.574860105 +0000 UTC m=+0.211325876 container init 734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:09Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=container-puppet-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510) Feb 1 02:53:42 localhost podman[52722]: 2026-02-01 07:53:42.583058919 +0000 UTC m=+0.219524660 container start 734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, build-date=2026-01-12T22:10:09Z, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-rsyslog, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:09Z, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 1 02:53:42 localhost podman[52722]: 2026-02-01 07:53:42.5836757 +0000 UTC m=+0.220141511 container attach 
734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, container_name=container-puppet-rsyslog, version=17.1.13, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:53:42 localhost podman[52755]: 2026-02-01 07:53:42.604040512 +0000 UTC m=+0.180196781 container create 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': 
{'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_puppet_step1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=container-puppet-ovn_controller, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 1 02:53:42 localhost podman[51834]: 2026-02-01 07:53:42.606332812 +0000 UTC m=+4.352322368 container died 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=container-puppet-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_puppet_step1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd) Feb 1 02:53:42 localhost podman[52809]: 2026-02-01 07:53:42.640543452 +0000 UTC m=+0.079417212 container cleanup 416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, 
container_name=container-puppet-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1) Feb 1 02:53:42 localhost systemd[1]: libpod-conmon-416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2.scope: Deactivated successfully. 
Feb 1 02:53:42 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 1 02:53:42 localhost systemd[1]: Started libpod-conmon-5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f.scope. Feb 1 02:53:42 localhost systemd[1]: Started libcrun container. 
Feb 1 02:53:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f62902336a91aed0e6d89cda1611500b3d6fe7b4bddf84b8ce31199c37cfaf6/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f62902336a91aed0e6d89cda1611500b3d6fe7b4bddf84b8ce31199c37cfaf6/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:42 localhost podman[52755]: 2026-02-01 07:53:42.723248448 +0000 UTC m=+0.299404737 container init 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller) Feb 1 02:53:42 localhost podman[52755]: 2026-02-01 07:53:42.732784187 +0000 UTC m=+0.308940456 container start 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 02:53:42 localhost podman[52755]: 2026-02-01 07:53:42.733956578 +0000 UTC m=+0.310112857 container attach 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=container-puppet-ovn_controller) Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}41fd6c6f800884fc5582fcd6978c5fdf9efd895ea286512b024eb4dc5635dca8' Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Feb 1 02:53:42 localhost puppet-user[52005]: Warning: Empty environment setting 'TLS_PASSWORD' Feb 1 02:53:42 localhost 
puppet-user[52005]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}ebc7fc3dcb9777cbffecb2db809cb7f56024c1a98bdd34554dbaaa8469bb0cdf' Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Feb 1 02:53:42 localhost puppet-user[52157]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:42 localhost puppet-user[52157]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:42 localhost puppet-user[52157]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:42 localhost puppet-user[52157]: (file & line not available) Feb 1 02:53:42 localhost puppet-user[52157]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:42 localhost puppet-user[52157]: (file & line not available) Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Feb 1 02:53:42 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Feb 1 02:53:43 localhost puppet-user[52157]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.38 seconds Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Feb 1 02:53:43 localhost systemd[1]: var-lib-containers-storage-overlay-193d63b6dd9579507d9f1518ccbcb97a99c18e05e53fbccdc25e375b68ff02d6-merged.mount: Deactivated successfully. Feb 1 02:53:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-416498f422383064ad86a44268caa4d4714236e931b56a3198ace870ec017bf2-userdata-shm.mount: Deactivated successfully. 
Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: 
Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Feb 1 02:53:43 
localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Feb 1 02:53:43 localhost puppet-user[52157]: Notice: Applied catalog in 0.50 seconds Feb 1 02:53:43 localhost puppet-user[52157]: Application: Feb 1 02:53:43 localhost puppet-user[52157]: Initial environment: production Feb 1 02:53:43 localhost puppet-user[52157]: Converged environment: production Feb 1 02:53:43 localhost puppet-user[52157]: Run mode: user Feb 1 02:53:43 localhost puppet-user[52157]: Changes: Feb 1 02:53:43 localhost puppet-user[52157]: Total: 31 Feb 1 02:53:43 localhost puppet-user[52157]: Events: Feb 1 02:53:43 localhost puppet-user[52157]: Success: 31 Feb 1 02:53:43 localhost puppet-user[52157]: Total: 31 Feb 1 02:53:43 localhost puppet-user[52157]: Resources: Feb 1 02:53:43 localhost puppet-user[52157]: Skipped: 22 Feb 1 02:53:43 localhost puppet-user[52157]: Changed: 31 Feb 1 02:53:43 localhost puppet-user[52157]: Out of sync: 31 Feb 1 02:53:43 localhost puppet-user[52157]: Total: 151 Feb 1 02:53:43 localhost puppet-user[52157]: Time: Feb 1 02:53:43 localhost puppet-user[52157]: Package: 0.03 Feb 1 02:53:43 localhost puppet-user[52157]: Ceilometer config: 0.39 Feb 1 02:53:43 localhost puppet-user[52157]: Config retrieval: 0.45 Feb 1 02:53:43 localhost puppet-user[52157]: Transaction evaluation: 0.49 Feb 1 02:53:43 localhost puppet-user[52157]: Catalog application: 0.50 Feb 1 02:53:43 localhost puppet-user[52157]: Last run: 1769932423 Feb 1 02:53:43 localhost puppet-user[52157]: Resources: 0.00 Feb 1 02:53:43 localhost puppet-user[52157]: Total: 0.50 Feb 1 02:53:43 localhost puppet-user[52157]: Version: Feb 1 02:53:43 localhost puppet-user[52157]: Config: 1769932422 Feb 1 02:53:43 localhost puppet-user[52157]: Puppet: 7.10.0 Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: 
/Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Feb 1 02:53:43 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Feb 1 02:53:44 localhost systemd[1]: libpod-5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91.scope: Deactivated successfully. Feb 1 02:53:44 localhost systemd[1]: libpod-5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91.scope: Consumed 3.169s CPU time. 
Feb 1 02:53:44 localhost podman[52110]: 2026-02-01 07:53:44.20678463 +0000 UTC m=+3.666051715 container died 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:24Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:24Z, container_name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, config_id=tripleo_puppet_step1, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: 
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Feb 1 02:53:44 localhost systemd[1]: tmp-crun.vEJE4z.mount: Deactivated successfully. Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Feb 1 02:53:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Feb 1 02:53:44 localhost podman[53013]: 2026-02-01 07:53:44.332425478 +0000 UTC m=+0.116357198 container cleanup 5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:24Z, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:24Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-central-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=container-puppet-ceilometer, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc.) Feb 1 02:53:44 localhost systemd[1]: libpod-conmon-5254c7db874e6a04303dfb1bd6b2faa423e5c8db607a20f3689713c64f535b91.scope: Deactivated successfully. Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Feb 1 02:53:44 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Feb 1 02:53:44 localhost systemd[1]: var-lib-containers-storage-overlay-d8c1697d3f9451811eabeba845d2774ca9523a4c1f6255791f262d42dbea547b-merged.mount: Deactivated successfully. Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}f9d8b60f125f93c01d13e9bc67ee58f1fd06cc57ef5fbe63b5478e0790417593' Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 1 02:53:44 localhost puppet-user[52892]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:44 localhost puppet-user[52892]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:44 localhost puppet-user[52892]: (file & line not available) Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:44 localhost puppet-user[52835]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:44 localhost puppet-user[52835]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:44 localhost puppet-user[52835]: (file & line not available) Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:44 localhost puppet-user[52892]: (file & line not available) Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:44 localhost puppet-user[52835]: (file & line not available) Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.24 seconds Feb 1 02:53:44 localhost puppet-user[52835]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.22 seconds Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53164]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53166]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2' Feb 1 02:53:44 localhost ovs-vsctl[53168]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-ip=172.19.0.108 Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b' Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}d9ddd6486f0577337caa69e7107b3a4c217ac8a894483a5e6ed8bdfdb439e8bc' Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52835]: Notice: Applied catalog in 0.11 seconds Feb 1 02:53:44 localhost puppet-user[52835]: Application: Feb 1 02:53:44 localhost puppet-user[52835]: Initial environment: production Feb 1 02:53:44 localhost puppet-user[52835]: Converged environment: production Feb 1 02:53:44 localhost puppet-user[52835]: Run mode: user Feb 1 02:53:44 localhost puppet-user[52835]: Changes: Feb 1 02:53:44 localhost puppet-user[52835]: Total: 3 Feb 1 02:53:44 localhost puppet-user[52835]: Events: Feb 1 02:53:44 localhost puppet-user[52835]: Success: 3 Feb 1 02:53:44 localhost puppet-user[52835]: Total: 3 Feb 1 02:53:44 localhost puppet-user[52835]: Resources: Feb 1 02:53:44 localhost puppet-user[52835]: Skipped: 11 Feb 1 02:53:44 localhost puppet-user[52835]: Changed: 3 Feb 1 02:53:44 localhost puppet-user[52835]: Out of sync: 3 Feb 1 02:53:44 localhost puppet-user[52835]: Total: 25 Feb 1 02:53:44 localhost puppet-user[52835]: Time: Feb 1 02:53:44 localhost puppet-user[52835]: Concat file: 0.00 Feb 1 02:53:44 localhost puppet-user[52835]: Concat fragment: 0.00 Feb 1 02:53:44 localhost puppet-user[52835]: File: 0.02 Feb 1 02:53:44 localhost puppet-user[52835]: Transaction evaluation: 0.10 Feb 1 02:53:44 localhost puppet-user[52835]: Catalog application: 0.11 Feb 1 02:53:44 localhost puppet-user[52835]: Config retrieval: 0.27 Feb 1 02:53:44 localhost puppet-user[52835]: Last run: 1769932424 Feb 1 02:53:44 localhost puppet-user[52835]: Total: 0.11 Feb 1 02:53:44 localhost puppet-user[52835]: Version: Feb 1 02:53:44 localhost 
puppet-user[52835]: Config: 1769932424 Feb 1 02:53:44 localhost puppet-user[52835]: Puppet: 7.10.0 Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53171]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005604215.localdomain Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005604215.novalocal' to 'np0005604215.localdomain' Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53173]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Feb 1 02:53:44 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53179]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000 Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53183]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60 Feb 1 02:53:44 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created Feb 1 02:53:44 localhost ovs-vsctl[53187]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53190]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000 Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53192]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0 Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53195]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:44:83:a4 Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53208]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53212]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Feb 1 02:53:45 localhost ovs-vsctl[53219]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0 Feb 1 02:53:45 localhost puppet-user[52892]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Feb 1 02:53:45 localhost systemd[1]: libpod-734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035.scope: Deactivated successfully. Feb 1 02:53:45 localhost systemd[1]: libpod-734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035.scope: Consumed 2.517s CPU time. Feb 1 02:53:45 localhost podman[52722]: 2026-02-01 07:53:45.200161023 +0000 UTC m=+2.836626794 container died 734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, release=1766032510, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:09Z, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, distribution-scope=public) Feb 1 02:53:45 localhost puppet-user[52892]: Notice: Applied catalog in 0.50 seconds Feb 1 02:53:45 localhost puppet-user[52892]: Application: Feb 1 02:53:45 localhost puppet-user[52892]: Initial environment: production Feb 1 02:53:45 localhost puppet-user[52892]: Converged environment: production Feb 1 02:53:45 localhost puppet-user[52892]: Run mode: user Feb 1 02:53:45 localhost puppet-user[52892]: Changes: Feb 1 02:53:45 localhost puppet-user[52892]: Total: 14 Feb 1 02:53:45 localhost puppet-user[52892]: Events: Feb 1 02:53:45 localhost puppet-user[52892]: Success: 14 Feb 1 02:53:45 localhost puppet-user[52892]: Total: 14 Feb 1 02:53:45 localhost puppet-user[52892]: Resources: Feb 1 02:53:45 localhost puppet-user[52892]: Skipped: 12 Feb 1 02:53:45 localhost puppet-user[52892]: Changed: 14 Feb 1 02:53:45 localhost puppet-user[52892]: Out of sync: 14 Feb 1 02:53:45 localhost puppet-user[52892]: Total: 29 Feb 1 02:53:45 localhost puppet-user[52892]: Time: Feb 1 02:53:45 localhost puppet-user[52892]: Exec: 0.02 Feb 1 02:53:45 localhost puppet-user[52892]: Config retrieval: 0.27 Feb 1 02:53:45 localhost puppet-user[52892]: Vs config: 0.44 Feb 1 02:53:45 localhost puppet-user[52892]: Transaction evaluation: 0.49 Feb 1 02:53:45 localhost puppet-user[52892]: Catalog application: 0.50 Feb 1 02:53:45 localhost puppet-user[52892]: Last run: 1769932425 Feb 1 02:53:45 localhost puppet-user[52892]: Total: 0.50 Feb 1 02:53:45 localhost puppet-user[52892]: Version: Feb 1 02:53:45 localhost puppet-user[52892]: Config: 1769932424 Feb 1 02:53:45 localhost puppet-user[52892]: Puppet: 7.10.0 Feb 1 02:53:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:45 localhost systemd[1]: var-lib-containers-storage-overlay-1783ac4e59af83bfa6c705cb913a4e3f5e5d835b34fd8ada82ce7a661d9e5a58-merged.mount: Deactivated successfully. 
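The ovs-vsctl calls logged above persist the ovn-controller settings (ovn-bridge, ovn-encap-ip, ovn-bridge-mappings, the probe intervals, and so on) as external_ids on the Open_vSwitch record, which is where ovn-controller reads its configuration. A minimal sketch for reading those keys back on the host with standard ovs-vsctl subcommands; the key names mirror the ones set in this log:

    # dump every external_id on the Open_vSwitch record at once
    ovs-vsctl --columns=external_ids list Open_vSwitch .
    # or read individual keys written by the ovn_controller puppet step
    ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings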
Feb 1 02:53:45 localhost podman[53227]: 2026-02-01 07:53:45.306070179 +0000 UTC m=+0.092797434 container cleanup 734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, container_name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 1 02:53:45 localhost systemd[1]: libpod-conmon-734d51f4d1bb1d3b860c53442f7df52821e3416d9570482daf00b7891c713035.scope: Deactivated successfully. 
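The rsyslog puppet step (puppet-user[52835]) reported two concat files, /etc/rsyslog.d/00_rsyslog.conf and /etc/rsyslog.d/50_openstack_logs.conf, each with a sha256 of its rendered content. A quick consistency check against those logged checksums, assuming the files end up at the same paths on the node; container-puppet.sh may instead stage them under the /var/lib/config-data bind mount listed in the container's volumes above:

    # compare against the '{sha256}...' values in the Notice lines above
    sha256sum /etc/rsyslog.d/00_rsyslog.conf /etc/rsyslog.d/50_openstack_logs.conf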
Feb 1 02:53:45 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 1 02:53:45 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Feb 1 02:53:45 localhost systemd[1]: libpod-5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f.scope: Deactivated successfully. Feb 1 02:53:45 localhost systemd[1]: libpod-5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f.scope: Consumed 2.914s CPU time. 
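The PODMAN-CONTAINER-DEBUG entry above is the full podman run invocation ansible used for container-puppet-rsyslog; the #012 sequences are octal escapes for the newlines embedded in the STEP_CONFIG value (LF is 012 octal), inserted when the multi-line message was logged, not literal text. The container's stdout goes to the k8s-file log driver path named in that command, so the puppet output can still be reviewed after the container exits, for example:

    # path taken verbatim from the --log-opt argument in the entry above
    less /var/log/containers/stdouts/container-puppet-rsyslog.log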
Feb 1 02:53:45 localhost podman[52755]: 2026-02-01 07:53:45.702715561 +0000 UTC m=+3.278871860 container died 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_puppet_step1, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true) Feb 1 02:53:45 localhost systemd[1]: var-lib-containers-storage-overlay-5f62902336a91aed0e6d89cda1611500b3d6fe7b4bddf84b8ce31199c37cfaf6-merged.mount: Deactivated successfully. Feb 1 02:53:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f-userdata-shm.mount: Deactivated successfully. 
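Unlike the rsyslog step, the ovn_controller puppet container bind-mounts /run/openvswitch from the host ('/run/openvswitch:/run/openvswitch:shared,z' in its config_data above), which is how the vs_config resources running inside the container were able to drive the host's ovsdb-server. A sketch of addressing that same database socket explicitly; the db.sock path is the usual Open vSwitch default and is an assumption here, not something shown in this log:

    # point ovs-vsctl at the host ovsdb-server socket shared into the container
    ovs-vsctl --db=unix:/run/openvswitch/db.sock show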
Feb 1 02:53:45 localhost podman[53300]: 2026-02-01 07:53:45.82373732 +0000 UTC m=+0.114620518 container cleanup 5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=container-puppet-ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 1 02:53:45 localhost systemd[1]: libpod-conmon-5bd5b1296ba45171c0a35cb096a0e87dd874626ccc80ef771650178655270e5f.scope: Deactivated successfully. 
Feb 1 02:53:45 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 02:53:45 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Feb 1 02:53:45 localhost 
puppet-user[52005]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Feb 1 02:53:45 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: 
/Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Feb 1 02:53:46 localhost puppet-user[52005]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4' Feb 1 02:53:46 localhost puppet-user[52005]: Notice: Applied catalog in 4.08 seconds Feb 1 02:53:46 localhost puppet-user[52005]: Application: Feb 1 02:53:46 localhost puppet-user[52005]: Initial environment: production Feb 1 02:53:46 localhost puppet-user[52005]: Converged environment: production Feb 1 02:53:46 localhost puppet-user[52005]: Run mode: user Feb 1 02:53:46 localhost puppet-user[52005]: Changes: Feb 1 02:53:46 localhost puppet-user[52005]: Total: 183 Feb 1 02:53:46 localhost puppet-user[52005]: Events: Feb 1 02:53:46 localhost puppet-user[52005]: Success: 183 Feb 1 02:53:46 localhost puppet-user[52005]: Total: 183 Feb 1 02:53:46 localhost puppet-user[52005]: Resources: Feb 1 02:53:46 localhost puppet-user[52005]: Changed: 183 Feb 1 02:53:46 localhost puppet-user[52005]: Out of sync: 183 Feb 1 02:53:46 localhost puppet-user[52005]: Skipped: 57 Feb 1 02:53:46 localhost puppet-user[52005]: Total: 487 Feb 1 02:53:46 localhost puppet-user[52005]: Time: Feb 1 02:53:46 localhost puppet-user[52005]: Concat file: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: Concat fragment: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: Anchor: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: File line: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: Virtlogd config: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: Virtnodedevd config: 0.01 Feb 1 02:53:46 localhost puppet-user[52005]: Virtsecretd config: 0.01 Feb 1 02:53:46 localhost puppet-user[52005]: Virtqemud config: 0.01 Feb 1 02:53:46 localhost puppet-user[52005]: Exec: 0.01 Feb 1 02:53:46 localhost puppet-user[52005]: Virtstoraged config: 0.01 Feb 1 02:53:46 localhost puppet-user[52005]: Package: 0.02 Feb 1 02:53:46 localhost puppet-user[52005]: File: 0.02 Feb 1 02:53:46 localhost puppet-user[52005]: Virtproxyd config: 0.03 Feb 1 02:53:46 localhost puppet-user[52005]: Augeas: 0.91 Feb 1 02:53:46 localhost puppet-user[52005]: Config retrieval: 1.48 Feb 1 02:53:46 localhost puppet-user[52005]: Last run: 1769932426 Feb 1 02:53:46 localhost puppet-user[52005]: Nova config: 2.84 Feb 1 02:53:46 localhost puppet-user[52005]: Transaction evaluation: 4.07 Feb 1 02:53:46 localhost puppet-user[52005]: Catalog application: 4.08 Feb 1 02:53:46 localhost puppet-user[52005]: Resources: 0.00 Feb 1 02:53:46 localhost puppet-user[52005]: Total: 4.08 Feb 1 02:53:46 localhost puppet-user[52005]: Version: Feb 1 02:53:46 localhost puppet-user[52005]: Config: 1769932421 Feb 1 02:53:46 localhost puppet-user[52005]: Puppet: 7.10.0 Feb 1 02:53:47 localhost systemd[1]: libpod-a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb.scope: Deactivated successfully. Feb 1 02:53:47 localhost systemd[1]: libpod-a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb.scope: Consumed 8.111s CPU time. 
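Each Nova_config[section/key] notice in the nova_libvirt run corresponds to one INI key that puppet wrote into nova.conf; the log records only that the keys were created, not their values. A sketch for spot-checking what was actually written; the puppet-generated path under /var/lib/config-data is an assumption (based on TripleO's read-write /var/lib/config-data bind mount for container-puppet) and may differ between releases:

    # keys taken from the Nova_config notices above; the values come from hieradata
    grep -E '^(debug|log_dir|transport_url|lock_path)[[:space:]]*=' \
        /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf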
Feb 1 02:53:47 localhost podman[51836]: 2026-02-01 07:53:47.692069905 +0000 UTC m=+9.430569352 container died a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 02:53:47 
localhost systemd[1]: tmp-crun.JbqPYh.mount: Deactivated successfully. Feb 1 02:53:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:47 localhost systemd[1]: var-lib-containers-storage-overlay-892d1779a7f946097f73616f672cd69c2781ff491e090964134e591e5adb1a86-merged.mount: Deactivated successfully. Feb 1 02:53:47 localhost podman[53372]: 2026-02-01 07:53:47.819617399 +0000 UTC m=+0.117114975 container cleanup a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, container_name=container-puppet-nova_libvirt, distribution-scope=public, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_puppet_step1, 
url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 02:53:47 localhost systemd[1]: libpod-conmon-a702590bfcf1f706e901ffe0737cf048f7386190369052664149dd19c5bfbecb.scope: Deactivated successfully. Feb 1 02:53:47 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 02:53:52 localhost podman[52893]: 2026-02-01 07:53:42.814258019 +0000 UTC m=+0.034796972 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 1 02:53:52 localhost podman[53472]: 2026-02-01 07:53:52.667882905 +0000 UTC m=+0.102146038 container create 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, build-date=2026-01-12T22:57:35Z, container_name=container-puppet-neutron, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:57:35Z, vcs-type=git, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 
neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-server, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 02:53:52 localhost podman[53472]: 2026-02-01 07:53:52.603368397 +0000 UTC m=+0.037631560 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 1 02:53:52 localhost systemd[1]: Started libpod-conmon-5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19.scope. Feb 1 02:53:52 localhost systemd[1]: Started libcrun container. Feb 1 02:53:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef459e28ad8635c7a92e994211ce7b874f14e5a38aca9f947ab317c65716a008/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 1 02:53:52 localhost podman[53472]: 2026-02-01 07:53:52.785884158 +0000 UTC m=+0.220147301 container init 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-server, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:57:35Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, build-date=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20260112.1) Feb 1 02:53:52 localhost podman[53472]: 2026-02-01 07:53:52.833985188 +0000 UTC m=+0.268248341 container start 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_id=tripleo_puppet_step1, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:57:35Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=container-puppet-neutron, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-neutron-server-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:57:35Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-server) Feb 1 02:53:52 localhost podman[53472]: 2026-02-01 07:53:52.83490662 +0000 UTC m=+0.269169773 container attach 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:57:35Z, io.openshift.expose-services=, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, build-date=2026-01-12T22:57:35Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-server, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, 
com.redhat.component=openstack-neutron-server-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_puppet_step1) Feb 1 02:53:54 localhost puppet-user[53502]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Feb 1 02:53:55 localhost puppet-user[53502]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 02:53:55 localhost puppet-user[53502]: (file: /etc/puppet/hiera.yaml) Feb 1 02:53:55 localhost puppet-user[53502]: Warning: Undefined variable '::deploy_config_name'; Feb 1 02:53:55 localhost puppet-user[53502]: (file & line not available) Feb 1 02:53:55 localhost puppet-user[53502]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 02:53:55 localhost puppet-user[53502]: (file & line not available) Feb 1 02:53:55 localhost puppet-user[53502]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Feb 1 02:53:55 localhost puppet-user[53502]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.61 seconds Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: 
created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 1 02:53:55 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Feb 1 02:53:56 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Feb 1 02:53:56 localhost puppet-user[53502]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Feb 1 02:53:56 localhost puppet-user[53502]: Notice: Applied catalog in 0.51 seconds Feb 1 02:53:56 localhost puppet-user[53502]: Application: Feb 1 02:53:56 localhost puppet-user[53502]: Initial environment: production Feb 1 02:53:56 localhost puppet-user[53502]: Converged environment: production Feb 1 02:53:56 localhost puppet-user[53502]: Run mode: user Feb 1 02:53:56 localhost puppet-user[53502]: Changes: Feb 1 02:53:56 localhost puppet-user[53502]: Total: 33 Feb 1 02:53:56 localhost puppet-user[53502]: Events: Feb 1 02:53:56 localhost puppet-user[53502]: Success: 33 Feb 1 02:53:56 localhost puppet-user[53502]: Total: 33 Feb 1 02:53:56 localhost 
puppet-user[53502]: Resources: Feb 1 02:53:56 localhost puppet-user[53502]: Skipped: 21 Feb 1 02:53:56 localhost puppet-user[53502]: Changed: 33 Feb 1 02:53:56 localhost puppet-user[53502]: Out of sync: 33 Feb 1 02:53:56 localhost puppet-user[53502]: Total: 155 Feb 1 02:53:56 localhost puppet-user[53502]: Time: Feb 1 02:53:56 localhost puppet-user[53502]: Resources: 0.00 Feb 1 02:53:56 localhost puppet-user[53502]: Ovn metadata agent config: 0.01 Feb 1 02:53:56 localhost puppet-user[53502]: Neutron config: 0.44 Feb 1 02:53:56 localhost puppet-user[53502]: Transaction evaluation: 0.50 Feb 1 02:53:56 localhost puppet-user[53502]: Catalog application: 0.51 Feb 1 02:53:56 localhost puppet-user[53502]: Config retrieval: 0.67 Feb 1 02:53:56 localhost puppet-user[53502]: Last run: 1769932436 Feb 1 02:53:56 localhost puppet-user[53502]: Total: 0.51 Feb 1 02:53:56 localhost puppet-user[53502]: Version: Feb 1 02:53:56 localhost puppet-user[53502]: Config: 1769932435 Feb 1 02:53:56 localhost puppet-user[53502]: Puppet: 7.10.0 Feb 1 02:53:56 localhost systemd[1]: libpod-5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19.scope: Deactivated successfully. Feb 1 02:53:56 localhost systemd[1]: libpod-5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19.scope: Consumed 3.668s CPU time. Feb 1 02:53:56 localhost podman[53472]: 2026-02-01 07:53:56.737535641 +0000 UTC m=+4.171798814 container died 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:57:35Z, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-neutron-server, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-neutron-server-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, version=17.1.13) Feb 1 02:53:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19-userdata-shm.mount: Deactivated successfully. Feb 1 02:53:56 localhost systemd[1]: var-lib-containers-storage-overlay-ef459e28ad8635c7a92e994211ce7b874f14e5a38aca9f947ab317c65716a008-merged.mount: Deactivated successfully. Feb 1 02:53:56 localhost podman[53614]: 2026-02-01 07:53:56.884187094 +0000 UTC m=+0.137782357 container cleanup 5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1766032510, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-server, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.created=2026-01-12T22:57:35Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=container-puppet-neutron, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:57:35Z, version=17.1.13) Feb 1 02:53:56 localhost systemd[1]: libpod-conmon-5031d7afe8cffd929b5248a1201872d9cbbd95beef630a2d68870efd023aaa19.scope: Deactivated successfully. Feb 1 02:53:56 localhost python3[51659]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005604215 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005604215', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 1 02:53:57 localhost python3[53668]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:53:58 localhost python3[53700]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:53:59 localhost python3[53750]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:53:59 localhost python3[53793]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932438.858567-84168-192413286051373/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:53:59 localhost python3[53855]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:54:00 localhost python3[53898]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932439.686717-84168-123115628839968/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:00 localhost python3[53960]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:54:01 localhost python3[54003]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932440.6059961-84228-207305042306292/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:01 localhost python3[54065]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:54:02 localhost python3[54108]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932441.5017838-84259-12409701200441/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:02 localhost python3[54138]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:54:02 localhost systemd[1]: Reloading. Feb 1 02:54:02 localhost systemd-rc-local-generator[54159]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:02 localhost systemd-sysv-generator[54164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:03 localhost systemd[1]: Reloading. Feb 1 02:54:03 localhost systemd-rc-local-generator[54204]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:03 localhost systemd-sysv-generator[54207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:03 localhost systemd[1]: Starting TripleO Container Shutdown... Feb 1 02:54:03 localhost systemd[1]: Finished TripleO Container Shutdown. 
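The entries above copy /usr/libexec/tripleo-container-shutdown, its systemd unit and its preset into place, then enable and start the service (the rc.local / SysV network / MemoryLimit= messages are generator warnings repeated on every daemon-reload on this host, not failures). A minimal sketch for confirming the result from a shell on the node, using standard systemctl calls rather than anything taken from this log:

    # Confirm the preset enabled the unit and that it started cleanly
    systemctl is-enabled tripleo-container-shutdown.service
    systemctl status tripleo-container-shutdown.service --no-pager
    # Inspect what was actually installed
    systemctl cat tripleo-container-shutdown.service
    cat /usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset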
Feb 1 02:54:03 localhost python3[54262]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:54:04 localhost python3[54305]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932443.450053-84304-257668070714577/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:04 localhost python3[54367]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:54:04 localhost python3[54410]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932444.2926428-84324-204530046180919/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:05 localhost python3[54440]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:54:05 localhost systemd[1]: Reloading. Feb 1 02:54:05 localhost systemd-rc-local-generator[54465]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:05 localhost systemd-sysv-generator[54468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:05 localhost systemd[1]: Reloading. Feb 1 02:54:05 localhost systemd-rc-local-generator[54504]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:05 localhost systemd-sysv-generator[54507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:06 localhost systemd[1]: Starting Create netns directory... Feb 1 02:54:06 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 02:54:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 02:54:06 localhost systemd[1]: Finished Create netns directory. 
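In the same way, the netns-placeholder unit and preset are copied in, enabled, and run once; the "Create netns directory" and run-netns-placeholder.mount messages suggest it only prepares /run/netns before any container needs a network namespace. A quick check, assuming the unit behaves as its name and these messages imply:

    # Unit state and what it actually runs
    systemctl status netns-placeholder.service --no-pager
    systemctl cat netns-placeholder.service
    # The placeholder should leave a usable /run/netns behind
    ls -ld /run/netns
    ip netns list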
Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: b8acc88e7150a91ea5eddde509e925f2 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 848fbaed99314033c0982eb0cffd8af7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 52a7bad153b9a3530edb4c6869c1fe7c Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 63e53a2f3cd2422147592f2c2c6c2f61 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 63e53a2f3cd2422147592f2c2c6c2f61 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 08ca8fb8877681656a098784127ead43 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:06 localhost python3[54534]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 1296029e90a465a2201c8dc6f8be17e7 Feb 1 02:54:07 localhost python3[54593]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 
config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.205938049 +0000 UTC m=+0.074491652 container create b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr_init_logs, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 02:54:08 localhost systemd[1]: Started libpod-conmon-b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583.scope. Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.163695731 +0000 UTC m=+0.032249374 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:54:08 localhost systemd[1]: Started libcrun container. 
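tripleo_container_manage reads the per-step JSON definitions under /var/lib/tripleo-config/container-startup-config/step_1 and turns each into a podman container labelled config_id=tripleo_step1, with stdout/stderr captured under /var/log/containers/stdouts. A sketch of how one might cross-check that on the host, using standard podman/coreutils usage and only paths that appear in the log above:

    # The step-1 container definitions the module consumed
    ls /var/lib/tripleo-config/container-startup-config/step_1/*.json
    # Containers created for this step, matched by the label it sets
    podman ps -a --filter label=config_id=tripleo_step1
    # Captured stdout/stderr of the init container that was just started
    cat /var/log/containers/stdouts/metrics_qdr_init_logs.log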
Feb 1 02:54:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac18d148f1ccb0eaa519a008e32625aabf00d458250cb02e5015187c1942ecc7/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.284336895 +0000 UTC m=+0.152890479 container init b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr_init_logs, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.293402358 +0000 UTC m=+0.161955941 container start b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=metrics_qdr_init_logs, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'command': ['/bin/bash', 
'-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.293543803 +0000 UTC m=+0.162097386 container attach b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1) Feb 1 02:54:08 localhost systemd[1]: libpod-b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583.scope: Deactivated successfully. 
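metrics_qdr_init_logs is a start_order 0 one-shot: it runs with no network, as root, only to chown the bind-mounted host directory /var/log/containers/metrics_qdr before the real qdrouterd container starts, which is why its scope deactivates within milliseconds of starting. A sketch for confirming it exited cleanly and left the expected ownership behind (the expected numeric owner is whatever uid qdrouterd maps to in the image, which this log does not show):

    # Exit status of the one-shot init container
    podman inspect metrics_qdr_init_logs --format '{{.State.ExitCode}}'
    # Ownership of the host-side log directory it chowned
    ls -ldn /var/log/containers/metrics_qdr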
Feb 1 02:54:08 localhost podman[54632]: 2026-02-01 07:54:08.304566874 +0000 UTC m=+0.173120477 container died b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, release=1766032510, config_id=tripleo_step1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container) Feb 1 02:54:08 localhost podman[54653]: 2026-02-01 07:54:08.387412904 +0000 UTC m=+0.067633906 container cleanup b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible) Feb 1 02:54:08 localhost systemd[1]: libpod-conmon-b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583.scope: Deactivated successfully. Feb 1 02:54:08 localhost python3[54593]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Feb 1 02:54:08 localhost podman[54727]: 2026-02-01 07:54:08.870763919 +0000 UTC m=+0.089395646 container create 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.13, managed_by=tripleo_ansible) Feb 1 02:54:08 localhost systemd[1]: Started libpod-conmon-75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.scope. Feb 1 02:54:08 localhost systemd[1]: Started libcrun container. Feb 1 02:54:08 localhost podman[54727]: 2026-02-01 07:54:08.827162705 +0000 UTC m=+0.045794472 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:54:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747231ffc56e15c128dac75ec633f161eee676530b28d17cb7b8d0be7728054/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 1 02:54:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747231ffc56e15c128dac75ec633f161eee676530b28d17cb7b8d0be7728054/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 1 02:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:54:08 localhost podman[54727]: 2026-02-01 07:54:08.961808853 +0000 UTC m=+0.180440620 container init 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 02:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:54:08 localhost podman[54727]: 2026-02-01 07:54:08.989774818 +0000 UTC m=+0.208406535 container start 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container) Feb 1 02:54:08 localhost python3[54593]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b8acc88e7150a91ea5eddde509e925f2 
--healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 1 02:54:09 localhost podman[54749]: 2026-02-01 07:54:09.096087748 +0000 UTC m=+0.096809153 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.13, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z) Feb 1 02:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-ac18d148f1ccb0eaa519a008e32625aabf00d458250cb02e5015187c1942ecc7-merged.mount: Deactivated successfully. Feb 1 02:54:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b37b394a6b121871a6e7de0a6a690655cf4345f3120333fa8002d7236b9b6583-userdata-shm.mount: Deactivated successfully. 
Feb 1 02:54:09 localhost podman[54749]: 2026-02-01 07:54:09.314808148 +0000 UTC m=+0.315529543 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z) Feb 1 02:54:09 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 02:54:09 localhost python3[54820]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:09 localhost python3[54836]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 02:54:10 localhost python3[54897]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932449.9111855-84498-102374918916503/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:10 localhost python3[54913]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 02:54:10 localhost systemd[1]: Reloading. Feb 1 02:54:10 localhost systemd-rc-local-generator[54938]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:10 localhost systemd-sysv-generator[54941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:11 localhost python3[54965]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 02:54:11 localhost systemd[1]: Reloading. Feb 1 02:54:11 localhost systemd-rc-local-generator[54991]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 02:54:11 localhost systemd-sysv-generator[54994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 02:54:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 02:54:11 localhost systemd[1]: Starting metrics_qdr container... Feb 1 02:54:12 localhost systemd[1]: Started metrics_qdr container. 
Feb 1 02:54:12 localhost python3[55045]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:13 localhost python3[55166]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005604215 step=1 update_config_hash_only=False Feb 1 02:54:14 localhost python3[55182]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:54:14 localhost python3[55198]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 02:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:54:39 localhost systemd[1]: tmp-crun.MkuwSG.mount: Deactivated successfully. Feb 1 02:54:39 localhost podman[55199]: 2026-02-01 07:54:39.870384519 +0000 UTC m=+0.085370166 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, container_name=metrics_qdr, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 02:54:40 localhost podman[55199]: 2026-02-01 07:54:40.028732795 +0000 UTC m=+0.243718362 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:54:40 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: 
Deactivated successfully. Feb 1 02:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:55:10 localhost systemd[1]: tmp-crun.oV8ac2.mount: Deactivated successfully. Feb 1 02:55:10 localhost podman[55305]: 2026-02-01 07:55:10.863717331 +0000 UTC m=+0.081324264 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 02:55:11 localhost podman[55305]: 2026-02-01 07:55:11.079070023 +0000 UTC m=+0.296676956 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_id=tripleo_step1, maintainer=OpenStack TripleO Team) Feb 1 02:55:11 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 02:55:41 localhost podman[55334]: 2026-02-01 07:55:41.862357457 +0000 UTC m=+0.077890232 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, config_id=tripleo_step1) Feb 1 02:55:42 localhost podman[55334]: 2026-02-01 07:55:42.080728408 +0000 UTC m=+0.296261163 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z) Feb 1 02:55:42 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 02:56:12 localhost podman[55441]: 2026-02-01 07:56:12.863074921 +0000 UTC m=+0.077756127 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 02:56:13 localhost podman[55441]: 2026-02-01 07:56:13.077230564 +0000 UTC m=+0.291911740 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 1 02:56:13 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 02:56:43 localhost podman[55470]: 2026-02-01 07:56:43.870803784 +0000 UTC m=+0.085293088 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 02:56:44 localhost podman[55470]: 2026-02-01 07:56:44.075907171 +0000 UTC m=+0.290396435 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 02:56:44 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 02:57:14 localhost podman[55575]: 2026-02-01 07:57:14.853157832 +0000 UTC m=+0.068043751 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 1 02:57:15 localhost podman[55575]: 2026-02-01 07:57:15.028471261 +0000 UTC m=+0.243357160 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.openshift.expose-services=, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:57:15 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:57:45 localhost systemd[1]: tmp-crun.mMD4YQ.mount: Deactivated successfully. 
Feb 1 02:57:45 localhost podman[55605]: 2026-02-01 07:57:45.861194997 +0000 UTC m=+0.080154276 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 1 02:57:46 localhost podman[55605]: 2026-02-01 07:57:46.076150972 +0000 UTC m=+0.295110251 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 1 02:57:46 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:58:16 localhost systemd[1]: tmp-crun.zFcmCU.mount: Deactivated successfully. 
Feb 1 02:58:16 localhost podman[55712]: 2026-02-01 07:58:16.895539929 +0000 UTC m=+0.110887424 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 02:58:17 localhost podman[55712]: 2026-02-01 07:58:17.108855483 +0000 UTC m=+0.324202958 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 02:58:17 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:58:39 localhost ceph-osd[31357]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2,1,3] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:58:40 localhost ceph-osd[31357]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2,1,3] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:58:41 localhost ceph-osd[31357]: osd.2 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1,2,0] r=1 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:58:43 localhost ceph-osd[32318]: osd.5 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [3,5,1] r=1 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:58:45 localhost ceph-osd[31357]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [4,3,2] r=2 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:58:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:58:47 localhost systemd[1]: tmp-crun.utZO7s.mount: Deactivated successfully. 
Feb 1 02:58:47 localhost podman[55739]: 2026-02-01 07:58:47.865255547 +0000 UTC m=+0.087092091 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1766032510, io.openshift.expose-services=) Feb 1 02:58:48 localhost podman[55739]: 2026-02-01 07:58:48.058689465 +0000 UTC m=+0.280525999 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, release=1766032510, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Feb 1 02:58:48 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:59:01 localhost ceph-osd[32318]: osd.5 pg_epoch: 30 pg[6.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [0,5,1] r=1 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:02 localhost ceph-osd[32318]: osd.5 pg_epoch: 32 pg[7.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [5,1,3] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:04 localhost ceph-osd[32318]: osd.5 pg_epoch: 33 pg[7.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [5,1,3] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 02:59:18 localhost podman[55891]: 2026-02-01 07:59:18.866761368 +0000 UTC m=+0.081706842 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:59:19 localhost podman[55891]: 2026-02-01 07:59:19.056803439 +0000 UTC m=+0.271748913 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 02:59:19 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 02:59:19 localhost ceph-osd[31357]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=36 pruub=9.109359741s) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active pruub 1122.711425781s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,3], acting [2,1,3] -> [2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:19 localhost ceph-osd[31357]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=36 pruub=9.109359741s) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.711425781s@ mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1f( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1b( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1c( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1d( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1a( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1e( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.8( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.9( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.6( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.5( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.2( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.4( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 
lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.7( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.3( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.a( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.b( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.d( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.e( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.c( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.f( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.12( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.11( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.13( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.10( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.14( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.15( empty 
local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.16( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.17( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.18( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.19( empty local-lis/les=19/20 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=36/37 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.15( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.14( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.16( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.13( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.17( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.12( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.18( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: 
osd.2 pg_epoch: 37 pg[2.d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.11( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.3( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.c( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.10( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.7( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.2( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.19( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.9( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.6( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 
pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.8( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1c( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.4( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.1b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:20 localhost ceph-osd[31357]: osd.2 pg_epoch: 37 pg[2.5( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=19/19 les/c/f=20/20/0 sis=36) [2,1,3] r=0 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:21 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.0 deep-scrub starts Feb 1 02:59:21 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.0 deep-scrub ok Feb 1 02:59:21 localhost ceph-osd[31357]: osd.2 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=8.304828644s) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active pruub 1123.930908203s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,0], acting [1,2,0] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:21 localhost ceph-osd[31357]: osd.2 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=8.302339554s) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.930908203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:21 localhost ceph-osd[32318]: osd.5 pg_epoch: 38 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=38 
pruub=10.063297272s) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 active pruub 1121.191040039s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:21 localhost ceph-osd[32318]: osd.5 pg_epoch: 38 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=38 pruub=10.060767174s) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.191040039s@ mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.16( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.18( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.17( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.15( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.14( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.13( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.11( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.10( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.f( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.d( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.c( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.b( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning 
to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.e( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.2( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.3( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.19( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.4( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.9( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1a( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.5( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.a( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.6( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.7( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.8( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1b( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1e( empty 
local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1f( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1c( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.12( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[32318]: osd.5 pg_epoch: 39 pg[4.1d( empty local-lis/les=22/23 n=0 ec=38/22 lis/c=22/22 les/c/f=23/23/0 sis=38) [3,5,1] r=1 lpr=38 pi=[22,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.9( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 
lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.3( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.6( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.4( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.1( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.10( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.13( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.12( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray 
Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.15( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.17( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.14( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.19( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.2( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:22 localhost ceph-osd[31357]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1,2,0] r=1 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:23 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.15 scrub starts Feb 1 02:59:23 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.15 scrub ok Feb 1 02:59:23 localhost ceph-osd[31357]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=10.047298431s) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 1127.717285156s@ mbc={}] start_peering_interval up [4,3,2] -> [4,3,2], acting [4,3,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:23 localhost ceph-osd[31357]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=10.044237137s) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.717285156s@ mbc={}] state: transitioning to Stray Feb 1 02:59:23 localhost ceph-osd[32318]: osd.5 pg_epoch: 40 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=10.134173393s) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 1123.288208008s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,1], acting [0,5,1] -> [0,5,1], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:23 localhost ceph-osd[32318]: osd.5 pg_epoch: 40 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=10.130561829s) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.288208008s@ mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.14 scrub starts Feb 1 02:59:24 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 
2.14 scrub ok Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.15( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1a( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.14( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.16( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.12( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.17( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.10( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.11( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.13( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.d( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.c( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.f( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.e( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.2( 
empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.9( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.6( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.3( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1b( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.18( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.b( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.7( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.4( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.a( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.19( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.8( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.5( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1e( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1d( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 
lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1c( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[32318]: osd.5 pg_epoch: 41 pg[6.1f( empty local-lis/les=30/31 n=0 ec=40/30 lis/c=30/30 les/c/f=31/31/0 sis=40) [0,5,1] r=1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to 
Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.12( empty 
local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:24 localhost ceph-osd[31357]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [4,3,2] r=2 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:25 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.17 scrub starts Feb 1 02:59:25 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.17 scrub ok Feb 1 02:59:25 localhost ceph-osd[32318]: osd.5 pg_epoch: 42 pg[7.0( v 34'39 (0'0,34'39] local-lis/les=32/33 n=22 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=42 pruub=10.773263931s) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 34'38 mlcod 34'38 active pruub 1125.998168945s@ mbc={}] start_peering_interval up [5,1,3] -> [5,1,3], acting [5,1,3] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:25 localhost ceph-osd[32318]: osd.5 pg_epoch: 42 pg[7.0( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=42 pruub=10.773263931s) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 34'38 mlcod 0'0 unknown pruub 1125.998168945s@ mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.18 scrub starts Feb 1 02:59:26 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.18 scrub ok Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.5( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.9( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.4( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.6( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.a( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] 
r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.7( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.2( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.1( v 34'39 (0'0,34'39] local-lis/les=32/33 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.8( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.f( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.e( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.d( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.c( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=32/33 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.1( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.0( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 34'38 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active 
mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.3( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.4( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.2( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.a( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.b( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.c( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.8( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.5( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost 
ceph-osd[32318]: osd.5 pg_epoch: 43 pg[7.6( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [5,1,3] r=0 lpr=42 pi=[32,42)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:26 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.0 scrub starts Feb 1 02:59:27 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.0 scrub ok Feb 1 02:59:31 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.1 scrub starts Feb 1 02:59:31 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.1 scrub ok Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.671681404s) [3,2,4] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646362305s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.748115540s) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722778320s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,4], acting [4,3,2] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747766495s) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722412109s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,3], acting [4,3,2] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.706328392s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.680908203s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,5], acting [1,2,0] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.706067085s) [0,2,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.680786133s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,1], acting [1,2,0] -> [0,2,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.706243515s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.680908203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747766495s) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.722412109s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=15.747913361s) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.722778320s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.706022263s) [0,2,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.680786133s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.671585083s) [3,2,4] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646362305s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.669023514s) [4,2,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644409180s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,3], acting [2,1,3] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668961525s) [4,2,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644409180s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668678284s) [5,1,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644042969s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747649193s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.723022461s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.697118759s) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.672607422s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,4], acting [1,2,0] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747649193s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.723022461s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668617249s) [5,1,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644042969s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 
pruub=13.697118759s) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1140.672607422s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747053146s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722656250s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747053146s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.722656250s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667881012s) [5,1,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.643554688s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,0], acting [2,1,3] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.697459221s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.673217773s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747257233s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.723022461s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.697418213s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.673217773s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.747172356s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.723022461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664317131s) [1,0,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.640258789s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.704445839s) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.680419922s@ mbc={}] start_peering_interval up [1,2,0] -> [2,4,0], acting 
[1,2,0] -> [2,4,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664267540s) [1,0,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.640258789s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667556763s) [5,1,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.643554688s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.704445839s) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1140.680419922s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745916367s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722045898s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745870590s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.722045898s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667304039s) [2,4,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.643554688s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,0], acting [2,1,3] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745551109s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.721801758s@ mbc={}] start_peering_interval up [4,3,2] -> [3,4,5], acting [4,3,2] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745512009s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.721801758s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667304039s) [2,4,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.643554688s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667317390s) [1,5,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.643554688s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], 
acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695111275s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.671508789s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667275429s) [1,5,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.643554688s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745864868s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722290039s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695067406s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.671508789s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695174217s) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.671630859s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,1], acting [1,2,0] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745836258s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.722290039s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745570183s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.722167969s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695174217s) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1140.671630859s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667712212s) [4,3,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644165039s@ mbc={}] start_peering_interval up [2,1,3] -> [4,3,2], acting [2,1,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost 
ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745529175s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.722167969s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668106079s) [5,3,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644775391s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,4], acting [2,1,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667649269s) [4,3,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644165039s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.694184303s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670898438s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668056488s) [5,3,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.694154739s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.670898438s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.746485710s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.723266602s@ mbc={}] start_peering_interval up [4,3,2] -> [3,1,5], acting [4,3,2] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.746417046s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.723266602s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.694130898s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.671142578s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,0], acting [1,2,0] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.694103241s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.671142578s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost 
ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.744627953s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.721557617s@ mbc={}] start_peering_interval up [4,3,2] -> [1,0,5], acting [4,3,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668325424s) [4,0,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645385742s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,5], acting [2,1,3] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693974495s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.671020508s@ mbc={}] start_peering_interval up [1,2,0] -> [1,5,0], acting [1,2,0] -> [1,5,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.672636986s) [4,2,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.649780273s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.744574547s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.721557617s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693933487s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.671020508s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.672598839s) [4,2,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.649780273s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668267250s) [4,0,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645385742s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743574142s) [5,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720825195s@ mbc={}] start_peering_interval up [4,3,2] -> [5,4,0], acting [4,3,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743518829s) [5,4,0] r=-1 
lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.720825195s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743288040s) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720581055s@ mbc={}] start_peering_interval up [4,3,2] -> [0,1,2], acting [4,3,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693218231s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670410156s@ mbc={}] start_peering_interval up [1,2,0] -> [2,1,0], acting [1,2,0] -> [2,1,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743253708s) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.720581055s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667409897s) [1,5,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644775391s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693218231s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1140.670410156s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667363167s) [1,5,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695250511s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.672729492s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743683815s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.721191406s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666829109s) [3,4,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644409180s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,2], acting [2,1,3] -> [3,4,2], acting_primary 2 
-> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743658066s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.721191406s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.695183754s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.672729492s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666769981s) [3,4,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644409180s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.703211784s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.680908203s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745508194s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.723144531s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.745478630s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.723144531s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.703177452s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.680908203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667142868s) [1,5,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.644897461s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,0], acting [2,1,3] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667075157s) [1,5,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.644897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702480316s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.680419922s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, 
role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702448845s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.680419922s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743301392s) [4,5,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.721313477s@ mbc={}] start_peering_interval up [4,3,2] -> [4,5,0], acting [4,3,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667696953s) [1,3,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645629883s@ mbc={}] start_peering_interval up [2,1,3] -> [1,3,2], acting [2,1,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743267059s) [4,5,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.721313477s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667664528s) [1,3,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645629883s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702930450s) [3,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.681030273s@ mbc={}] start_peering_interval up [1,2,0] -> [3,5,1], acting [1,2,0] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702900887s) [3,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681030273s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743289948s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.721435547s@ mbc={}] start_peering_interval up [4,3,2] -> [1,3,5], acting [4,3,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.743255615s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.721435547s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702823639s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 
1140.681152344s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666821480s) [5,3,1] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645263672s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,1], acting [2,1,3] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.702794075s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681152344s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741458893s) [5,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719848633s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,4], acting [4,3,2] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741419792s) [5,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.719848633s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666716576s) [5,3,1] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645263672s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692336082s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670898438s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692304611s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.670898438s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741690636s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720336914s@ mbc={}] start_peering_interval up [4,3,2] -> [3,5,4], acting [4,3,2] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741659164s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.720336914s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=36/37 
n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667568207s) [3,4,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646362305s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,5], acting [2,1,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666659355s) [5,1,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645507812s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.667483330s) [3,4,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646362305s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666622162s) [5,1,3] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645507812s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692347527s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.671264648s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692308426s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.671264648s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666785240s) [5,0,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645874023s@ mbc={}] start_peering_interval up [2,1,3] -> [5,0,4], acting [2,1,3] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741279602s) [0,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720458984s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,1], acting [4,3,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.691709518s) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670898438s@ mbc={}] start_peering_interval up [1,2,0] -> [2,0,4], acting [1,2,0] -> [2,0,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=36/37 n=0 
ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666743279s) [5,0,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645874023s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.741243362s) [0,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.720458984s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.691709518s) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1140.670898438s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.740839958s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720092773s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.740790367s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.720092773s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668323517s) [3,1,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.647583008s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,2], acting [2,1,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.691550255s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670898438s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,4], acting [1,2,0] -> [5,3,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.668286324s) [3,1,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.647583008s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.740085602s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719482422s@ mbc={}] start_peering_interval up [4,3,2] -> [5,0,1], acting [4,3,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.691515923s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.670898438s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=15.740049362s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.719482422s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.670497894s) [1,0,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.650024414s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.670463562s) [1,0,2] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.650024414s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693242073s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.672851562s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.740189552s) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719726562s@ mbc={}] start_peering_interval up [4,3,2] -> [2,3,1], acting [4,3,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693207741s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.672851562s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666841507s) [3,1,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646484375s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.740189552s) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.719726562s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666806221s) [3,1,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646484375s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692721367s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.672485352s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 
Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.739383698s) [5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719360352s@ mbc={}] start_peering_interval up [4,3,2] -> [5,1,3], acting [4,3,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666863441s) [2,1,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646850586s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,0], acting [2,1,3] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693044662s) [4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.673095703s@ mbc={}] start_peering_interval up [1,2,0] -> [4,2,3], acting [1,2,0] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.739315987s) [5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.719360352s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666863441s) [2,1,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.646850586s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738803864s) [4,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.718994141s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,2], acting [4,3,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738779068s) [4,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.718994141s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666206360s) [3,5,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646484375s@ mbc={}] start_peering_interval up [2,1,3] -> [3,5,4], acting [2,1,3] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666164398s) [3,5,4] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646484375s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.693004608s) 
[4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.673095703s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692625999s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.672485352s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692666054s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.673095703s@ mbc={}] start_peering_interval up [1,2,0] -> [4,0,5], acting [1,2,0] -> [4,0,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738787651s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719238281s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.692623138s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.673095703s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738755226s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.719238281s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.689872742s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670410156s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,3], acting [1,2,0] -> [4,5,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.689837456s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.670410156s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.665906906s) [2,4,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646606445s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,3], acting [2,1,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.739544868s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.720214844s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost 
ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.689888000s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.670654297s@ mbc={}] start_peering_interval up [1,2,0] -> [4,3,2], acting [1,2,0] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.665906906s) [2,4,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.646606445s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.669133186s) [1,2,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.649902344s@ mbc={}] start_peering_interval up [2,1,3] -> [1,2,3], acting [2,1,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.739544868s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.720214844s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.669072151s) [1,2,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.649902344s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738558769s) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.719482422s@ mbc={}] start_peering_interval up [4,3,2] -> [2,0,4], acting [4,3,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700650215s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.681762695s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,3], acting [1,2,0] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666586876s) [4,2,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.647583008s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.738558769s) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.719482422s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700602531s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681762695s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.666532516s) [4,2,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.647583008s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737504959s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.718872070s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.699731827s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.681030273s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737471581s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1142.718872070s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.665313721s) [4,5,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646728516s@ mbc={}] start_peering_interval up [2,1,3] -> [4,5,0], acting [2,1,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.699681282s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681030273s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.665266991s) [4,5,0] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646728516s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.689131737s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.670654297s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737303734s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.718872070s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737273216s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY pruub 1142.718872070s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737103462s) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1142.718750000s@ mbc={}] start_peering_interval up [4,3,2] -> [2,1,3], acting [4,3,2] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700087547s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.681762695s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,4], acting [1,2,0] -> [0,5,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.737103462s) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1142.718750000s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700148582s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1140.681884766s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700118065s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681884766s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.700004578s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1140.681762695s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664992332s) [0,2,4] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.646972656s@ mbc={}] start_peering_interval up [2,1,3] -> [0,2,4], acting [2,1,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664954185s) [0,2,4] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.646972656s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.663228035s) [1,0,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.645385742s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,5], acting [2,1,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.c( 
empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.663178444s) [1,0,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.645385742s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664872169s) [3,1,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.647216797s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=36/37 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44 pruub=11.664809227s) [3,1,5] r=-1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.647216797s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.19( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.733100891s) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197143555s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,1], acting [0,5,1] -> [5,3,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.19( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.733100891s) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.197143555s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1b( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.684711456s) [2,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149291992s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1b( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.684578896s) [2,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149291992s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.1c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 
lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.d( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,3,1] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,0,4] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.19( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.677124977s) [1,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144897461s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.7( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1b( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.728588104s) [1,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197143555s@ mbc={}] start_peering_interval up [0,5,1] -> [1,2,0], acting [0,5,1] -> [1,2,0], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.19( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.676439285s) [1,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1b( empty local-lis/les=40/41 n=0 ec=40/30 
lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.728481293s) [1,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197143555s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.d( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 
02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,3,4] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.10( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.18( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.669281960s) [2,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.143798828s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1a( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.729084969s) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.203613281s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1a( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.729084969s) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.203613281s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.18( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.669144630s) [2,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.143798828s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.17( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665369034s) [3,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.140380859s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,4], acting [3,5,1] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.721194267s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196044922s@ mbc={}] start_peering_interval up [0,5,1] -> [2,4,0], acting [0,5,1] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.721135139s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown 
NOTIFY pruub 1138.196044922s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.16( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665118217s) [0,2,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.140258789s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.16( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665058136s) [0,2,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.140258789s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.15( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.668439865s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.143798828s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.720371246s) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.195922852s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,4], acting [0,5,1] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.15( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.668400764s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.143798828s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.17( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665307999s) [3,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.140380859s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.14( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667908669s) [4,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.143798828s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.720163345s) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195922852s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.14( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667826653s) [4,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.143798828s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.16( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.720002174s) [0,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 
1138.195922852s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,4], acting [0,5,1] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.720026016s) [1,0,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.195800781s@ mbc={}] start_peering_interval up [0,5,1] -> [1,0,2], acting [0,5,1] -> [1,0,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.16( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719942093s) [0,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195922852s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1c( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.672893524s) [1,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149291992s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719747543s) [1,0,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195800781s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1c( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.672736168s) [1,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149291992s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719455719s) [3,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196044922s@ mbc={}] start_peering_interval up [0,5,1] -> [3,1,2], acting [0,5,1] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719401360s) [3,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196044922s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.13( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.668066978s) [2,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144775391s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.12( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.672833443s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149658203s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 
-> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.12( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.672800064s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149658203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.13( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.668010712s) [2,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.10( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719079018s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.195922852s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,2], acting [0,5,1] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.10( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718941689s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195922852s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.11( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667149544s) [3,5,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144287109s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.11( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667105675s) [3,5,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144287109s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.719029427s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196289062s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,2], acting [0,5,1] -> [3,4,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718981743s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196289062s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.12( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718306541s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.195800781s@ mbc={}] start_peering_interval up [0,5,1] -> [4,2,0], acting [0,5,1] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 
4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.12( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718264580s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195800781s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.f( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667136192s) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144775391s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718710899s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196533203s@ mbc={}] start_peering_interval up [0,5,1] -> [2,3,1], acting [0,5,1] -> [2,3,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.f( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.667104721s) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718682289s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196533203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.e( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.666660309s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144897461s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.10( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.666120529s) [3,4,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144531250s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,2], acting [3,5,1] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.e( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.666480064s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718157768s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196899414s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,4], acting [0,5,1] -> [3,2,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.d( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 
sis=44 pruub=13.665976524s) [2,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144775391s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.789257050s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.268066406s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.d( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665924072s) [2,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.718091965s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196899414s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,0,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.717349052s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196289062s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.788927078s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.268066406s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.788862228s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.268066406s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.717156410s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196289062s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.10( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665730476s) [3,4,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144531250s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 
sis=44 pruub=15.717579842s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197143555s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.717579842s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.197143555s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.788775444s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.268066406s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.b( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665179253s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144897461s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,4], acting [3,5,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.9( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.717197418s) [0,1,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197143555s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,5], acting [0,5,1] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.b( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665058136s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[7.b( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.9( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.717163086s) [0,1,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197143555s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.c( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665095329s) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144653320s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,1], acting [3,5,1] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.c( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665095329s) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.144653320s@ 
mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.3( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.786544800s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.267822266s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.715110779s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196533203s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.3( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.786461830s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.267822266s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.715110779s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.196533203s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.3( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.663132668s) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144775391s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.714653015s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196533203s@ mbc={}] start_peering_interval up [0,5,1] -> [1,5,3], acting [0,5,1] -> [1,5,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.1( v 34'39 (0'0,34'39] local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.785576820s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.267333984s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.3( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.663085938s) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144775391s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.1( v 34'39 (0'0,34'39] local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.785248756s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.267333984s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost 
ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662741661s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144897461s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,0], acting [3,5,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.3( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.714677811s) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197021484s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.714594841s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196533203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.3( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.714677811s) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.197021484s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.4( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.664740562s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.147338867s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.2( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662286758s) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.144897461s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.4( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.664683342s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.147338867s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.2( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662236214s) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.9( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.664308548s) [1,0,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.147338867s@ mbc={}] start_peering_interval up [3,5,1] -> [1,0,2], acting [3,5,1] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713985443s) [3,5,1] r=1 lpr=44 
pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197021484s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,1], acting [0,5,1] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713943481s) [3,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197021484s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713064194s) [3,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196166992s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.9( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.664274216s) [1,0,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.147338867s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713011742s) [3,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196166992s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.18( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713712692s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197021484s@ mbc={}] start_peering_interval up [0,5,1] -> [0,2,4], acting [0,5,1] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1a( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665072441s) [2,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.148437500s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.18( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.713678360s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197021484s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1a( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.665037155s) [2,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.148437500s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.784646988s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.267822266s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 
localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.7( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.712370872s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196777344s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.5( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662937164s) [5,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.147460938s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,0], acting [3,5,1] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.786987305s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.271484375s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.784045219s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.267822266s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.5( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662937164s) [5,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.147460938s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.7( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.712370872s) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.196777344s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.786813736s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.271484375s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.a( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662644386s) [2,0,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.147827148s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,1], acting [3,5,1] -> [2,0,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.a( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662581444s) [2,0,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.147827148s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.5( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.786248207s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.271728516s@ 
mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710806847s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196166992s@ mbc={}] start_peering_interval up [0,5,1] -> [2,1,3], acting [0,5,1] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710494995s) [3,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196289062s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.5( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.785942078s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.271728516s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710417747s) [3,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196289062s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662599564s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.144897461s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710174561s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196166992s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.5( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710013390s) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196533203s@ mbc={}] start_peering_interval up [0,5,1] -> [5,1,0], acting [0,5,1] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.6( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662632942s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149291992s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.5( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.710013390s) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.196533203s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.7( empty 
local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662511826s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149047852s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.7( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662325859s) [0,1,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149047852s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.6( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.662568092s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149291992s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.a( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709159851s) [5,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196289062s@ mbc={}] start_peering_interval up [0,5,1] -> [5,0,4], acting [0,5,1] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709859848s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.197143555s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709807396s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197143555s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.a( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709159851s) [5,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1138.196289062s@ mbc={}] state: transitioning to Primary Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.b( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.784202576s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1132.271484375s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.8( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661916733s) [1,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149536133s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709301949s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 
active pruub 1138.197021484s@ mbc={}] start_peering_interval up [0,5,1] -> [4,5,3], acting [0,5,1] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.8( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661812782s) [1,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149536133s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1d( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661817551s) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149658203s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1d( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661761284s) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149658203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.707850456s) [1,3,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.195800781s@ mbc={}] start_peering_interval up [0,5,1] -> [1,3,5], acting [0,5,1] -> [1,3,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.707735062s) [1,3,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.195800781s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1e( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661364555s) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149658203s@ mbc={}] start_peering_interval up [3,5,1] -> [0,5,1], acting [3,5,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1e( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.661241531s) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149658203s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.709239006s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.197021484s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.707995415s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1138.196899414s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[7.b( v 34'39 
(0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.784082413s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.271484375s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1f( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.660345078s) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.149414062s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=15.707566261s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1138.196899414s@ mbc={}] state: transitioning to Stray Feb 1 02:59:32 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[4.1f( empty local-lis/les=38/39 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=13.660030365s) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.149414062s@ mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.15( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.12( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,2,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.6( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [4,5,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.19( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 
sis=44) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.1b( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,2,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,0,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.10( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [4,0,5] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.17( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,1,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.b( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [3,5,4] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.2( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [3,1,5] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.1( empty local-lis/les=0/0 n=0 ec=36/19 
lis/c=36/36 les/c/f=37/37/0 sis=44) [3,4,5] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.10( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.6( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.16( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,2,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.1e( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [3,1,5] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.18( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.18( empty 
local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,1,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.b( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [1,5,0] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [2,1,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.17( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[5.8( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,5] r=2 lpr=44 
pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.c( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [1,0,5] r=2 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.1( v 34'39 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [2,4,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[3.14( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [1,5,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [1,5,3] r=1 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown 
mbc={}] state: transitioning to Stray Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,3,4] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.7( v 34'39 lc 34'21 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.d( v 34'39 lc 34'13 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.10( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [2,4,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 
crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,3] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.7( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.d( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,3,1] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[3.1c( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: 
osd.5 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.5( v 34'39 lc 34'11 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.f( v 34'39 lc 34'1 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.19( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,0,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[3.13( empty local-lis/les=44/45 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[7.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[31357]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,0,4] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=36/19 lis/c=36/36 les/c/f=37/37/0 sis=44) [5,1,0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.a( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=38/22 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.5( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.1a( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:33 localhost ceph-osd[32318]: osd.5 pg_epoch: 45 pg[6.3( empty local-lis/les=44/45 n=0 ec=40/30 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.6( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.220208168s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1140.271850586s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.a( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 
pruub=15.219919205s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1140.271606445s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.6( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.220129013s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.271850586s@ mbc={}] state: transitioning to Stray Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.a( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.219849586s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.271606445s@ mbc={}] state: transitioning to Stray Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.2( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.216197968s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1140.268188477s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.2( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.216075897s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.268188477s@ mbc={}] state: transitioning to Stray Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.215479851s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1140.268188477s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:35 localhost ceph-osd[32318]: osd.5 pg_epoch: 46 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=15.215414047s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.268188477s@ mbc={}] state: transitioning to Stray Feb 1 02:59:36 localhost python3[55936]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.399884224s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 mlcod 0'0 active pruub 1144.015136719s@ m=2 mbc={255={(2+1)=2}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=2 
ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.399776459s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1144.015136719s@ m=2 mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/46/0 sis=48 pruub=12.399454117s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=34'39 mlcod 0'0 active pruub 1144.015014648s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/46/0 sis=48 pruub=12.399329185s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1144.015014648s@ mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.f( v 34'39 lc 34'1 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.401145935s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1144.017211914s@ m=3 mbc={255={(2+1)=3}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.f( v 34'39 lc 34'1 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.401041031s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.017211914s@ m=3 mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.402244568s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 mlcod 0'0 active pruub 1144.018554688s@ m=1 mbc={255={(2+1)=1}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[31357]: osd.2 pg_epoch: 48 pg[7.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/42 les/c/f=45/43/0 sis=48 pruub=12.402137756s) [3,2,4] r=1 lpr=48 pi=[42,48)/2 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1144.018554688s@ m=1 mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.b( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.b( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.7( v 34'39 (0'0,34'39] lb MIN local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active 
mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.7( v 34'39 (0'0,34'39] lb MIN local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.3( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.3( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:37 localhost ceph-osd[32318]: osd.5 pg_epoch: 48 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=48) [3,2,4] r=-1 lpr=48 pi=[42,48)/2 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 1 02:59:37 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.4 scrub starts Feb 1 02:59:38 localhost python3[55952]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:39 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts Feb 1 02:59:40 localhost python3[55968]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:40 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 7.8 scrub starts Feb 1 02:59:41 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.5 scrub starts Feb 1 02:59:41 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.5 scrub ok Feb 1 02:59:42 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 4.c scrub starts Feb 1 02:59:43 localhost ceph-osd[32318]: osd.5 pg_epoch: 50 pg[7.4( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.010475159s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=34'39 lcod 0'0 
mlcod 0'0 active pruub 1148.268310547s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:43 localhost ceph-osd[32318]: osd.5 pg_epoch: 50 pg[7.4( v 34'39 (0'0,34'39] local-lis/les=42/43 n=2 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.010384560s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1148.268310547s@ mbc={}] state: transitioning to Stray Feb 1 02:59:43 localhost ceph-osd[32318]: osd.5 pg_epoch: 50 pg[7.c( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.013626099s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1148.271850586s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:43 localhost ceph-osd[32318]: osd.5 pg_epoch: 50 pg[7.c( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.013566017s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1148.271850586s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Feb 1 02:59:43 localhost python3[56016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:59:43 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.1a scrub starts Feb 1 02:59:44 localhost python3[56059]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932783.5295072-91478-238905355061275/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=814f759dcc97f4b50c85badaa6f3819c2533c70a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:44 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts Feb 1 02:59:45 localhost ceph-osd[31357]: osd.2 pg_epoch: 52 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/49/0 sis=52 pruub=12.211205482s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=34'39 mlcod 0'0 active pruub 1152.015258789s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:45 localhost ceph-osd[31357]: osd.2 pg_epoch: 52 pg[7.5( v 34'39 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=44/44 les/c/f=45/48/0 sis=52 pruub=12.213039398s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=34'39 mlcod 0'0 active pruub 1152.017211914s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:45 localhost ceph-osd[31357]: osd.2 pg_epoch: 52 pg[7.5( v 34'39 (0'0,34'39] local-lis/les=44/45 n=2 ec=42/32 lis/c=44/44 les/c/f=45/48/0 sis=52 pruub=12.212736130s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 
crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1152.017211914s@ mbc={}] state: transitioning to Stray Feb 1 02:59:45 localhost ceph-osd[31357]: osd.2 pg_epoch: 52 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/49/0 sis=52 pruub=12.210746765s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1152.015258789s@ mbc={}] state: transitioning to Stray Feb 1 02:59:46 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.3 scrub starts Feb 1 02:59:47 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.f scrub starts Feb 1 02:59:47 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.f scrub ok Feb 1 02:59:47 localhost ceph-osd[32318]: osd.5 pg_epoch: 54 pg[7.6( v 34'39 (0'0,34'39] local-lis/les=46/47 n=2 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=12.748791695s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1150.093139648s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:47 localhost ceph-osd[32318]: osd.5 pg_epoch: 54 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=46/47 n=1 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=12.748353958s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1150.093139648s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:47 localhost ceph-osd[32318]: osd.5 pg_epoch: 54 pg[7.6( v 34'39 (0'0,34'39] local-lis/les=46/47 n=2 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=12.748511314s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1150.093139648s@ mbc={}] state: transitioning to Stray Feb 1 02:59:47 localhost ceph-osd[32318]: osd.5 pg_epoch: 54 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=46/47 n=1 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=12.748250008s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1150.093139648s@ mbc={}] state: transitioning to Stray Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 55 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=48/49 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=13.773110390s) [2,1,3] r=0 lpr=55 pi=[48,55)/1 luod=0'0 crt=34'39 mlcod 0'0 active pruub 1156.685668945s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 55 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=48/49 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=13.772881508s) [2,1,3] r=0 lpr=55 pi=[48,55)/1 luod=0'0 crt=34'39 mlcod 0'0 active pruub 1156.685668945s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> 
[2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 55 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=48/49 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=13.773110390s) [2,1,3] r=0 lpr=55 pi=[48,55)/1 crt=34'39 mlcod 0'0 unknown pruub 1156.685668945s@ mbc={}] state: transitioning to Primary Feb 1 02:59:48 localhost ceph-osd[31357]: osd.2 pg_epoch: 55 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=48/49 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=13.772881508s) [2,1,3] r=0 lpr=55 pi=[48,55)/1 crt=34'39 mlcod 0'0 unknown pruub 1156.685668945s@ mbc={}] state: transitioning to Primary Feb 1 02:59:48 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.e deep-scrub starts Feb 1 02:59:49 localhost python3[56121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:59:49 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.e deep-scrub ok Feb 1 02:59:49 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.14 scrub starts Feb 1 02:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 02:59:49 localhost podman[56165]: 2026-02-01 07:59:49.348539374 +0000 UTC m=+0.089585690 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public) Feb 1 02:59:49 localhost python3[56164]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932788.7011445-91478-207462224252359/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=9a0c41ba35379304dc7e57883346ea3531963e9b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:49 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.14 scrub ok Feb 1 02:59:49 localhost podman[56165]: 2026-02-01 07:59:49.563941953 +0000 UTC m=+0.304988199 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 02:59:49 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 02:59:49 localhost ceph-osd[31357]: osd.2 pg_epoch: 56 pg[7.7( v 34'39 (0'0,34'39] local-lis/les=55/56 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55) [2,1,3] r=0 lpr=55 pi=[48,55)/1 crt=34'39 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:49 localhost ceph-osd[31357]: osd.2 pg_epoch: 56 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=55/56 n=1 ec=42/32 lis/c=48/48 les/c/f=49/49/0 sis=55) [2,1,3] r=0 lpr=55 pi=[48,55)/1 crt=34'39 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:49 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.2 scrub starts Feb 1 02:59:50 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.2 scrub ok Feb 1 02:59:50 localhost ceph-osd[32318]: osd.5 pg_epoch: 57 pg[7.8( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.852387428s) [3,2,1] r=-1 lpr=57 pi=[42,57)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1156.272338867s@ mbc={}] start_peering_interval up [5,1,3] -> [3,2,1], acting [5,1,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:50 localhost ceph-osd[32318]: osd.5 pg_epoch: 57 pg[7.8( v 34'39 (0'0,34'39] local-lis/les=42/43 n=1 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.852313042s) [3,2,1] r=-1 lpr=57 pi=[42,57)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1156.272338867s@ mbc={}] state: transitioning to Stray Feb 1 02:59:50 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts Feb 1 02:59:50 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok Feb 1 02:59:51 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1c scrub starts Feb 1 02:59:51 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1c scrub ok Feb 1 02:59:51 localhost ceph-osd[31357]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=57) [3,2,1] r=1 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 02:59:52 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.1a scrub starts Feb 1 02:59:52 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.1a scrub ok Feb 1 02:59:53 localhost ceph-osd[31357]: osd.2 pg_epoch: 59 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=12.394413948s) [0,4,2] r=2 lpr=59 pi=[44,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1160.018554688s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,2], acting [2,1,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:53 localhost ceph-osd[31357]: osd.2 pg_epoch: 59 pg[7.9( v 34'39 (0'0,34'39] local-lis/les=44/45 n=1 ec=42/32 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=12.394232750s) [0,4,2] r=2 lpr=59 pi=[44,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 
1160.018554688s@ mbc={}] state: transitioning to Stray Feb 1 02:59:54 localhost python3[56253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:59:54 localhost python3[56296]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932793.9390342-91478-104316381665708/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=c332e57191fea146df898938173f766e25b9bcd9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:55 localhost ceph-osd[32318]: osd.5 pg_epoch: 61 pg[7.a( v 34'39 (0'0,34'39] local-lis/les=46/47 n=1 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=12.825468063s) [4,0,5] r=2 lpr=61 pi=[46,61)/1 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1158.093627930s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:55 localhost ceph-osd[32318]: osd.5 pg_epoch: 61 pg[7.a( v 34'39 (0'0,34'39] local-lis/les=46/47 n=1 ec=42/32 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=12.825372696s) [4,0,5] r=2 lpr=61 pi=[46,61)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1158.093627930s@ mbc={}] state: transitioning to Stray Feb 1 02:59:55 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 4.5 scrub starts Feb 1 02:59:55 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 4.5 scrub ok Feb 1 02:59:56 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.a scrub starts Feb 1 02:59:56 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.a scrub ok Feb 1 02:59:57 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.7 scrub starts Feb 1 02:59:58 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.7 scrub ok Feb 1 02:59:58 localhost python3[56358]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 02:59:58 localhost ceph-osd[32318]: osd.5 pg_epoch: 64 pg[7.c( v 34'39 (0'0,34'39] local-lis/les=50/51 n=1 ec=42/32 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=9.910997391s) [2,3,4] r=-1 lpr=64 pi=[50,64)/1 luod=0'0 crt=34'39 lcod 0'0 mlcod 0'0 active pruub 1158.308227539s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [0,5,4] -> [2,3,4], acting [0,5,4] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 02:59:58 localhost ceph-osd[32318]: osd.5 pg_epoch: 64 pg[7.c( v 34'39 (0'0,34'39] local-lis/les=50/51 n=1 ec=42/32 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=9.910633087s) [2,3,4] r=-1 lpr=64 pi=[50,64)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1158.308227539s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Feb 1 02:59:58 localhost ceph-osd[31357]: osd.2 pg_epoch: 64 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=50/50 les/c/f=51/51/0 sis=64) [2,3,4] r=0 lpr=64 pi=[50,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 1 02:59:59 localhost python3[56403]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932798.294558-91839-218751065619437/source _original_basename=tmpq3joktrv follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 02:59:59 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1b scrub starts Feb 1 02:59:59 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1b scrub ok Feb 1 02:59:59 localhost ceph-osd[31357]: osd.2 pg_epoch: 65 pg[7.c( v 34'39 lc 34'17 (0'0,34'39] local-lis/les=64/65 n=1 ec=42/32 lis/c=50/50 les/c/f=51/51/0 sis=64) [2,3,4] r=0 lpr=64 pi=[50,64)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Feb 1 02:59:59 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.9 scrub starts Feb 1 03:00:00 localhost python3[56465]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:00 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.9 scrub ok Feb 1 03:00:00 localhost python3[56508]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932799.8762736-91927-236625377719175/source _original_basename=tmph1rbryab follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:00 localhost ceph-osd[31357]: osd.2 pg_epoch: 66 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=52/53 n=1 ec=42/32 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=9.914355278s) [2,3,1] r=0 lpr=66 pi=[52,66)/1 luod=0'0 crt=34'39 mlcod 0'0 active pruub 1164.864868164s@ mbc={}] start_peering_interval up [4,0,2] -> [2,3,1], acting [4,0,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 03:00:00 localhost ceph-osd[31357]: osd.2 pg_epoch: 66 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=52/53 n=1 ec=42/32 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=9.914355278s) [2,3,1] r=0 lpr=66 pi=[52,66)/1 crt=34'39 mlcod 0'0 unknown pruub 1164.864868164s@ mbc={}] state: transitioning to Primary Feb 1 03:00:01 localhost python3[56538]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Feb 1 03:00:01 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts Feb 1 03:00:01 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok Feb 1 03:00:01 localhost python3[56556]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:00:01 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.7 scrub starts Feb 1 03:00:01 localhost 
ceph-osd[31357]: osd.2 pg_epoch: 67 pg[7.d( v 34'39 (0'0,34'39] local-lis/les=66/67 n=1 ec=42/32 lis/c=52/52 les/c/f=53/53/0 sis=66) [2,3,1] r=0 lpr=66 pi=[52,66)/1 crt=34'39 mlcod 0'0 active+degraded mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Feb 1 03:00:01 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.7 scrub ok Feb 1 03:00:02 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.2 scrub starts Feb 1 03:00:02 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.2 scrub ok Feb 1 03:00:02 localhost ansible-async_wrapper.py[56728]: Invoked with 185955943630 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1769932802.4000807-92012-270336552753347/AnsiballZ_command.py _ Feb 1 03:00:02 localhost ansible-async_wrapper.py[56731]: Starting module and watcher Feb 1 03:00:02 localhost ansible-async_wrapper.py[56731]: Start watching 56732 (3600) Feb 1 03:00:02 localhost ansible-async_wrapper.py[56732]: Start module (56732) Feb 1 03:00:02 localhost ansible-async_wrapper.py[56728]: Return async_wrapper task started. Feb 1 03:00:03 localhost ceph-osd[31357]: osd.2 pg_epoch: 68 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=54/55 n=1 ec=42/32 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.451250076s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 luod=0'0 crt=34'39 mlcod 0'0 active pruub 1166.962158203s@ mbc={}] start_peering_interval up [0,2,4] -> [3,1,5], acting [0,2,4] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 03:00:03 localhost ceph-osd[31357]: osd.2 pg_epoch: 68 pg[7.e( v 34'39 (0'0,34'39] local-lis/les=54/55 n=1 ec=42/32 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.451083183s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1166.962158203s@ mbc={}] state: transitioning to Stray Feb 1 03:00:03 localhost python3[56749]: ansible-ansible.legacy.async_status Invoked with jid=185955943630.56728 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:00:03 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.3 scrub starts Feb 1 03:00:03 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.3 scrub ok Feb 1 03:00:04 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.8 scrub starts Feb 1 03:00:04 localhost ceph-osd[32318]: osd.5 pg_epoch: 68 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=54/54 les/c/f=55/55/0 sis=68) [3,1,5] r=2 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 03:00:04 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 2.8 scrub ok Feb 1 03:00:05 localhost ceph-osd[31357]: osd.2 pg_epoch: 70 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=55/56 n=1 ec=42/32 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.112198830s) [0,4,5] r=-1 lpr=70 pi=[55,70)/1 crt=34'39 mlcod 0'0 active pruub 1167.941162109s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [0,4,5], acting [2,1,3] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 1 03:00:05 localhost ceph-osd[31357]: osd.2 pg_epoch: 70 pg[7.f( v 34'39 (0'0,34'39] local-lis/les=55/56 n=1 ec=42/32 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=8.112115860s) [0,4,5] r=-1 lpr=70 pi=[55,70)/1 crt=34'39 mlcod 0'0 unknown NOTIFY pruub 1167.941162109s@ mbc={}] state: transitioning to Stray Feb 1 03:00:06 localhost ceph-osd[32318]: osd.5 pg_epoch: 70 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=55/55 les/c/f=56/56/0 sis=70) [0,4,5] r=2 lpr=70 
pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 1 03:00:06 localhost puppet-user[56752]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 03:00:06 localhost puppet-user[56752]: (file: /etc/puppet/hiera.yaml) Feb 1 03:00:06 localhost puppet-user[56752]: Warning: Undefined variable '::deploy_config_name'; Feb 1 03:00:06 localhost puppet-user[56752]: (file & line not available) Feb 1 03:00:06 localhost puppet-user[56752]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 03:00:06 localhost puppet-user[56752]: (file & line not available) Feb 1 03:00:06 localhost puppet-user[56752]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 1 03:00:06 localhost puppet-user[56752]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 1 03:00:06 localhost puppet-user[56752]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.11 seconds Feb 1 03:00:07 localhost puppet-user[56752]: Notice: Applied catalog in 0.03 seconds Feb 1 03:00:07 localhost puppet-user[56752]: Application: Feb 1 03:00:07 localhost puppet-user[56752]: Initial environment: production Feb 1 03:00:07 localhost puppet-user[56752]: Converged environment: production Feb 1 03:00:07 localhost puppet-user[56752]: Run mode: user Feb 1 03:00:07 localhost puppet-user[56752]: Changes: Feb 1 03:00:07 localhost puppet-user[56752]: Events: Feb 1 03:00:07 localhost puppet-user[56752]: Resources: Feb 1 03:00:07 localhost puppet-user[56752]: Total: 10 Feb 1 03:00:07 localhost puppet-user[56752]: Time: Feb 1 03:00:07 localhost puppet-user[56752]: Schedule: 0.00 Feb 1 03:00:07 localhost puppet-user[56752]: File: 0.00 Feb 1 03:00:07 localhost puppet-user[56752]: Exec: 0.01 Feb 1 03:00:07 localhost puppet-user[56752]: Augeas: 0.01 Feb 1 03:00:07 localhost puppet-user[56752]: Transaction evaluation: 0.03 Feb 1 03:00:07 localhost puppet-user[56752]: Catalog application: 0.03 Feb 1 03:00:07 localhost puppet-user[56752]: Config retrieval: 0.14 Feb 1 03:00:07 localhost puppet-user[56752]: Last run: 1769932807 Feb 1 03:00:07 localhost puppet-user[56752]: Filebucket: 0.00 Feb 1 03:00:07 localhost puppet-user[56752]: Total: 0.04 Feb 1 03:00:07 localhost puppet-user[56752]: Version: Feb 1 03:00:07 localhost puppet-user[56752]: Config: 1769932806 Feb 1 03:00:07 localhost puppet-user[56752]: Puppet: 7.10.0 Feb 1 03:00:07 localhost ansible-async_wrapper.py[56732]: Module complete (56732) Feb 1 03:00:07 localhost ansible-async_wrapper.py[56731]: Done in kid B. 
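The ceph-osd records in this stretch all follow the same shape: a pg descriptor in square brackets, then either a "start_peering_interval up [old] -> [new], acting [old] -> [new], ..." summary or a "state: transitioning to <State>" line; the churn is ordinary peering activity while placement groups remap during the deployment rather than an error condition. A minimal Python sketch for tabulating those transitions from a saved copy of this log; the regular expressions are assumptions based only on the records shown here, not a general ceph log parser:

    import re
    from collections import Counter

    # Assumed record layout, e.g.:
    #   ... pg[7.6( v 34'39 ... )] ... start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], ...
    #   ... pg[7.6( ... )] ... state: transitioning to Stray
    PEERING = re.compile(
        r"pg\[(?P<pg>[0-9a-f]+\.[0-9a-f]+)\(.*?"
        r"start_peering_interval up \[(?P<up_old>[\d,]+)\] -> \[(?P<up_new>[\d,]+)\], "
        r"acting \[(?P<act_old>[\d,]+)\] -> \[(?P<act_new>[\d,]+)\]"
    )
    STATE = re.compile(r"pg\[(?P<pg>[0-9a-f]+\.[0-9a-f]+)\(.*?state: transitioning to (?P<state>\w+)")

    def summarize(path):
        """Count peering-interval changes and record the last reported state per placement group."""
        peerings, states = Counter(), {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PEERING.search(line)
                if m:
                    peerings[m["pg"]] += 1
                m = STATE.search(line)
                if m:
                    states[m["pg"]] = m["state"]
        return peerings, states

    if __name__ == "__main__":
        peerings, states = summarize("/var/log/messages")  # example path, not taken from this host
        for pg, count in sorted(peerings.items()):
            print(f"{pg}: {count} peering interval change(s), last state {states.get(pg, 'unknown')}")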
Feb 1 03:00:08 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.13 scrub starts Feb 1 03:00:08 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.13 scrub ok Feb 1 03:00:09 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts Feb 1 03:00:09 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok Feb 1 03:00:10 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.11 scrub starts Feb 1 03:00:10 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.11 scrub ok Feb 1 03:00:10 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts Feb 1 03:00:10 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok Feb 1 03:00:11 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.16 scrub starts Feb 1 03:00:11 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 2.16 scrub ok Feb 1 03:00:13 localhost python3[56955]: ansible-ansible.legacy.async_status Invoked with jid=185955943630.56728 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:00:14 localhost python3[56971]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:00:14 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.2 scrub starts Feb 1 03:00:14 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.2 scrub ok Feb 1 03:00:14 localhost python3[56987]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:00:15 localhost python3[57037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:15 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.16 scrub starts Feb 1 03:00:15 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 3.16 scrub ok Feb 1 03:00:15 localhost python3[57055]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpdlejn6da recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:00:15 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.f scrub starts Feb 1 03:00:15 localhost python3[57085]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:15 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.f scrub ok Feb 1 03:00:16 localhost python3[57189]: 
ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 1 03:00:17 localhost python3[57208]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:18 localhost python3[57240]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:00:19 localhost python3[57290]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:19 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts Feb 1 03:00:19 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok Feb 1 03:00:19 localhost python3[57308]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:19 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.1c scrub starts Feb 1 03:00:19 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.1c scrub ok Feb 1 03:00:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:00:19 localhost podman[57370]: 2026-02-01 08:00:19.744403745 +0000 UTC m=+0.081338570 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, architecture=x86_64, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z) Feb 1 03:00:19 localhost python3[57371]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:19 localhost podman[57370]: 2026-02-01 08:00:19.933757155 +0000 UTC m=+0.270692020 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:14Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:00:19 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
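Each pair of "container health_status" / "container exec_died" podman events above corresponds to one run of the transient /usr/bin/podman healthcheck run <id> unit that systemd starts for the metrics_qdr container; the result is carried in the health_status= field inside the parenthesised metadata. A small sketch, under the same line-layout assumptions as the snippet above, that reports the most recent health status seen per container name:

    import re

    # Assumed event layout, e.g.:
    #   podman[...]: ... container health_status <id> (image=..., name=metrics_qdr, health_status=healthy, ...)
    HEALTH = re.compile(
        r"container health_status \S+ \(.*?[(,]\s*name=(?P<name>[^,)]+).*?health_status=(?P<status>[^,)]+)"
    )

    def last_health(path):
        """Return {container_name: last reported health status} from a syslog-style file."""
        latest = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = HEALTH.search(line)
                if m:
                    latest[m["name"].strip()] = m["status"].strip()
        return latest

    if __name__ == "__main__":
        for name, status in sorted(last_health("/var/log/messages").items()):  # example path
            print(f"{name}: {status}")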
Feb 1 03:00:20 localhost python3[57416]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:20 localhost python3[57482]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:20 localhost python3[57500]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:21 localhost python3[57562]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:21 localhost python3[57580]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:22 localhost python3[57610]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:00:22 localhost systemd[1]: Reloading. Feb 1 03:00:22 localhost systemd-sysv-generator[57640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:00:22 localhost systemd-rc-local-generator[57634]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:00:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 03:00:23 localhost python3[57696]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:23 localhost python3[57714]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:23 localhost python3[57776]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:00:24 localhost python3[57794]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:24 localhost python3[57824]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:00:24 localhost systemd[1]: Reloading. Feb 1 03:00:24 localhost systemd-sysv-generator[57856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:00:24 localhost systemd-rc-local-generator[57852]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:00:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:00:24 localhost systemd[1]: Starting Create netns directory... Feb 1 03:00:24 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 03:00:24 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 03:00:24 localhost systemd[1]: Finished Create netns directory. 
Feb 1 03:00:25 localhost python3[57883]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 03:00:25 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts Feb 1 03:00:25 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok Feb 1 03:00:27 localhost python3[57940]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 1 03:00:27 localhost podman[58003]: 2026-02-01 08:00:27.333396107 +0000 UTC m=+0.080581997 container create 0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2026-01-12T23:31:49Z, vcs-type=git, config_id=tripleo_step2, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:00:27 localhost systemd[1]: Started libpod-conmon-0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787.scope. Feb 1 03:00:27 localhost podman[58003]: 2026-02-01 08:00:27.286592045 +0000 UTC m=+0.033777965 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:00:27 localhost systemd[1]: Started libcrun container. 
Feb 1 03:00:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc97c01fd5eabbf2d8e0d9991f11a9043512db93b5f6f0454866fe7414277f1/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 1 03:00:27 localhost podman[58003]: 2026-02-01 08:00:27.417350119 +0000 UTC m=+0.164535989 container init 0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud_init_logs, build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 1 03:00:27 localhost podman[58003]: 2026-02-01 08:00:27.433017042 +0000 UTC m=+0.180202932 container start 0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, managed_by=tripleo_ansible, config_id=tripleo_step2, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, container_name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:00:27 localhost python3[57940]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Feb 1 03:00:27 localhost systemd[1]: libpod-0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787.scope: Deactivated successfully. 
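The PODMAN-CONTAINER-DEBUG entry above makes the translation visible: the config_data recorded on the container's labels is expanded into the full podman run command line (environment becomes --env, net becomes --network, privileged becomes --privileged, security_opt and volumes become repeated --security-opt and --volume flags). The Python sketch below reproduces that mapping for the nova_virtqemud_init_logs data shown in the log; it is a simplified illustration that omits the --label flags, not the tripleo_ansible module's actual code.

    # Simplified sketch of the config_data -> "podman run" argument mapping that
    # the PODMAN-CONTAINER-DEBUG log line above makes visible. Illustrative only;
    # the --label arguments from the real command are omitted for brevity.
    def podman_run_args(name, cfg):
        args = ["podman", "run", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--detach=True"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args += ["--log-driver", "k8s-file",
                 "--log-opt", f"path=/var/log/containers/stdouts/{name}.log"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "privileged" in cfg:
            args += [f"--privileged={cfg['privileged']}"]
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        args += cfg.get("command", [])
        return args

    # config_data exactly as recorded in the log for nova_virtqemud_init_logs
    cfg = {"command": ["/bin/bash", "-c", "chown -R tss:tss /var/log/swtpm"],
           "environment": {"TRIPLEO_DEPLOY_IDENTIFIER": "1769931690"},
           "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1",
           "net": "none", "privileged": True,
           "security_opt": ["label=level:s0", "label=type:spc_t", "label=filetype:container_file_t"],
           "user": "root",
           "volumes": ["/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z"]}
    print(" ".join(podman_run_args("nova_virtqemud_init_logs", cfg)))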
Feb 1 03:00:27 localhost podman[58024]: 2026-02-01 08:00:27.450446351 +0000 UTC m=+0.144515600 container create 023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute_init_log) Feb 1 03:00:27 localhost systemd[1]: Started libpod-conmon-023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd.scope. Feb 1 03:00:27 localhost podman[58024]: 2026-02-01 08:00:27.39864214 +0000 UTC m=+0.092711419 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:00:27 localhost systemd[1]: Started libcrun container. 
Feb 1 03:00:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e81b03279955a60a1adecef9798de6e2f56144145c95c44327ebc53e7747a37/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:00:27 localhost podman[58043]: 2026-02-01 08:00:27.518686828 +0000 UTC m=+0.065559554 container died 0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, container_name=nova_virtqemud_init_logs, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:00:27 localhost podman[58049]: 2026-02-01 08:00:27.591810159 +0000 UTC m=+0.126890644 container cleanup 0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, container_name=nova_virtqemud_init_logs, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-libvirt-container, release=1766032510) Feb 1 03:00:27 localhost systemd[1]: libpod-conmon-0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787.scope: Deactivated successfully. Feb 1 03:00:27 localhost podman[58024]: 2026-02-01 08:00:27.619203511 +0000 UTC m=+0.313272770 container init 023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, container_name=nova_compute_init_log, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:00:27 localhost podman[58024]: 2026-02-01 08:00:27.627796942 +0000 UTC m=+0.321866191 container start 023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute_init_log, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, version=17.1.13) Feb 1 03:00:27 localhost python3[57940]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Feb 1 03:00:27 localhost systemd[1]: libpod-023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd.scope: Deactivated successfully. 
Feb 1 03:00:27 localhost podman[58086]: 2026-02-01 08:00:27.704992582 +0000 UTC m=+0.057153211 container died 023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13) Feb 1 03:00:27 localhost podman[58092]: 2026-02-01 08:00:27.741333755 +0000 UTC m=+0.076894691 container cleanup 023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.13, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:00:27 localhost systemd[1]: libpod-conmon-023b656bbb0901ce30f777777a74fde646d7cce4b5244f472a7e788e575157dd.scope: Deactivated successfully. Feb 1 03:00:28 localhost podman[58186]: 2026-02-01 08:00:28.123062269 +0000 UTC m=+0.090007774 container create 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=create_haproxy_wrapper, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1766032510, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:00:28 localhost systemd[1]: Started libpod-conmon-09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315.scope. Feb 1 03:00:28 localhost systemd[1]: Started libcrun container. 
Feb 1 03:00:28 localhost podman[58186]: 2026-02-01 08:00:28.078220417 +0000 UTC m=+0.045165972 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 1 03:00:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2933f5278c3e34f217deba3df65be56d0deb9a26e06617aee0cea81e2014367/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 03:00:28 localhost podman[58209]: 2026-02-01 08:00:28.19205868 +0000 UTC m=+0.085044418 container create 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, container_name=create_virtlogd_wrapper, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Feb 1 03:00:28 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1f scrub starts Feb 1 03:00:28 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1f scrub ok Feb 1 03:00:28 localhost systemd[1]: Started libpod-conmon-3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58.scope. Feb 1 03:00:28 localhost systemd[1]: Started libcrun container. 
Feb 1 03:00:28 localhost podman[58186]: 2026-02-01 08:00:28.244621544 +0000 UTC m=+0.211567049 container init 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step2, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, container_name=create_haproxy_wrapper, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z) Feb 1 03:00:28 localhost podman[58209]: 2026-02-01 08:00:28.148954833 +0000 UTC m=+0.041940601 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:00:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50254bf8e87a075d183197f5531e6c0f97888346b53b5d118b5ece2506404cbc/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 1 03:00:28 localhost podman[58186]: 2026-02-01 08:00:28.254377161 +0000 UTC m=+0.221322666 container start 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=create_haproxy_wrapper, tcib_managed=true, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com) Feb 1 03:00:28 localhost podman[58186]: 2026-02-01 08:00:28.254893787 +0000 UTC m=+0.221839332 container attach 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=create_haproxy_wrapper, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:00:28 localhost podman[58209]: 2026-02-01 08:00:28.306015006 +0000 UTC m=+0.199000764 container init 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step2, build-date=2026-01-12T23:31:49Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=create_virtlogd_wrapper, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Feb 1 03:00:28 localhost podman[58209]: 2026-02-01 08:00:28.314460062 +0000 UTC m=+0.207445830 container start 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=create_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com) Feb 1 03:00:28 localhost podman[58209]: 2026-02-01 08:00:28.314789523 +0000 UTC m=+0.207775281 container attach 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, container_name=create_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true) Feb 1 03:00:28 localhost systemd[1]: var-lib-containers-storage-overlay-0bc97c01fd5eabbf2d8e0d9991f11a9043512db93b5f6f0454866fe7414277f1-merged.mount: Deactivated successfully. Feb 1 03:00:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0605aa4760a862e958c0fb6713ff69acd745a7a6a9c31cccc8745e77505c3787-userdata-shm.mount: Deactivated successfully. Feb 1 03:00:29 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.d scrub starts Feb 1 03:00:29 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.d scrub ok Feb 1 03:00:29 localhost ovs-vsctl[58323]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 1 03:00:30 localhost systemd[1]: libpod-3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58.scope: Deactivated successfully. Feb 1 03:00:30 localhost systemd[1]: libpod-3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58.scope: Consumed 2.049s CPU time. 
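The ovs-vsctl error above ("unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)") indicates that ovsdb-server was not yet listening on its control socket when the command ran. The sketch below shows the kind of wait-for-socket check a caller could use before retrying ovs-vsctl; the timeout and polling interval are illustrative values, not taken from this deployment.

    # Illustrative readiness check for the OVSDB control socket referenced in the
    # ovs-vsctl error above. Timeout and interval values are arbitrary examples.
    import os
    import time

    OVSDB_SOCK = "/var/run/openvswitch/db.sock"   # path taken from the log message

    def wait_for_ovsdb(sock_path=OVSDB_SOCK, timeout=30.0, interval=1.0):
        """Return True once the socket exists, False if the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(sock_path):
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        print("ovsdb socket present:", wait_for_ovsdb())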
Feb 1 03:00:30 localhost podman[58209]: 2026-02-01 08:00:30.366927595 +0000 UTC m=+2.259913363 container died 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.expose-services=, config_id=tripleo_step2, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=create_virtlogd_wrapper, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:00:30 localhost systemd[1]: tmp-crun.sfDeCW.mount: Deactivated successfully. Feb 1 03:00:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58-userdata-shm.mount: Deactivated successfully. 
Feb 1 03:00:30 localhost podman[58449]: 2026-02-01 08:00:30.477572457 +0000 UTC m=+0.097179179 container cleanup 3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, container_name=create_virtlogd_wrapper, build-date=2026-01-12T23:31:49Z, architecture=x86_64, config_id=tripleo_step2, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 1 03:00:30 localhost systemd[1]: libpod-conmon-3738f4d9f86def7281cb8a2a5f2901e3ed40bf88ad618b18e33db25a08a07b58.scope: Deactivated successfully. 
Feb 1 03:00:30 localhost python3[57940]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Feb 1 03:00:31 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1 scrub starts Feb 1 03:00:31 localhost systemd[1]: libpod-09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315.scope: Deactivated successfully. Feb 1 03:00:31 localhost systemd[1]: libpod-09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315.scope: Consumed 2.115s CPU time. 
Feb 1 03:00:31 localhost podman[58186]: 2026-02-01 08:00:31.309054544 +0000 UTC m=+3.276000029 container died 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=create_haproxy_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step2, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:00:31 localhost podman[58490]: 2026-02-01 08:00:31.380274116 +0000 UTC m=+0.060801215 container cleanup 09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, config_id=tripleo_step2, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=create_haproxy_wrapper, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible) Feb 1 03:00:31 localhost systemd[1]: libpod-conmon-09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315.scope: Deactivated successfully. 
Feb 1 03:00:31 localhost python3[57940]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Feb 1 03:00:31 localhost systemd[1]: var-lib-containers-storage-overlay-50254bf8e87a075d183197f5531e6c0f97888346b53b5d118b5ece2506404cbc-merged.mount: Deactivated successfully. Feb 1 03:00:31 localhost systemd[1]: var-lib-containers-storage-overlay-b2933f5278c3e34f217deba3df65be56d0deb9a26e06617aee0cea81e2014367-merged.mount: Deactivated successfully. Feb 1 03:00:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09968e3f6820d539a0f919b784a87aaed2031785c033ca7e095dfdf925256315-userdata-shm.mount: Deactivated successfully. 
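Every container launched in this step leaves a PODMAN-CONTAINER-DEBUG entry like the ones above, recording the exact podman run command that tripleo_container_manage issued. When auditing a log such as this one it can be convenient to pull those commands out; the sketch below does so for a saved copy of the log, assuming one journal entry per line in the export. The input path is a placeholder, not a path from this system.

    # Pull the recorded "podman run" command lines out of a saved copy of this log.
    # LOG_PATH is a placeholder; adjust it to wherever the log was exported.
    import re

    LOG_PATH = "/tmp/overcloud-compute.log"   # placeholder, not from the log
    MARKER = "PODMAN-CONTAINER-DEBUG: "

    def extract_podman_commands(log_path=LOG_PATH):
        commands = []
        with open(log_path, errors="replace") as fh:
            for line in fh:
                idx = line.find(MARKER)
                if idx != -1:
                    commands.append(line[idx + len(MARKER):].strip())
        return commands

    if __name__ == "__main__":
        for cmd in extract_podman_commands():
            # Print just the container name for a quick overview.
            m = re.search(r"--name (\S+)", cmd)
            print(m.group(1) if m else cmd[:80])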
Feb 1 03:00:31 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.1 scrub ok Feb 1 03:00:31 localhost python3[58543]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:32 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts Feb 1 03:00:32 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok Feb 1 03:00:33 localhost python3[58664]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005604215 step=2 update_config_hash_only=False Feb 1 03:00:34 localhost python3[58680]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:00:34 localhost python3[58696]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 03:00:34 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.5 scrub starts Feb 1 03:00:34 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.5 scrub ok Feb 1 03:00:35 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.10 scrub starts Feb 1 03:00:35 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.10 scrub ok Feb 1 03:00:37 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.a scrub starts Feb 1 03:00:37 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 3.a scrub ok Feb 1 03:00:38 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.18 scrub starts Feb 1 03:00:38 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 5.18 scrub ok Feb 1 03:00:38 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.15 scrub starts Feb 1 03:00:38 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 5.15 scrub ok Feb 1 03:00:41 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.15 scrub starts Feb 1 03:00:41 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.15 scrub ok Feb 1 03:00:43 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.13 scrub starts Feb 1 03:00:43 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.13 scrub ok Feb 1 03:00:44 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 4.c scrub starts Feb 1 03:00:44 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 4.c scrub ok Feb 1 03:00:45 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.1a scrub starts Feb 1 03:00:45 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.1a scrub ok Feb 1 03:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats 
**#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4860 writes, 21K keys, 4860 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4860 writes, 515 syncs, 9.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1464 writes, 5264 keys, 1464 commit groups, 1.0 writes per commit group, ingest: 2.32 MB, 0.00 MB/s#012Interval WAL: 1464 writes, 314 syncs, 4.66 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for 
pending_compaction_bytes, 0 memtable_compaction, 0 me Feb 1 03:00:46 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.19 scrub starts Feb 1 03:00:46 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.19 scrub ok Feb 1 03:00:47 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.3 scrub starts Feb 1 03:00:47 localhost ceph-osd[32318]: log_channel(cluster) log [DBG] : 6.3 scrub ok Feb 1 03:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4708 writes, 21K keys, 4708 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4708 writes, 468 syncs, 10.06 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1461 writes, 5079 keys, 1461 commit groups, 1.0 writes per commit group, ingest: 2.28 MB, 0.00 MB/s#012Interval WAL: 1461 writes, 329 syncs, 4.44 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) 
IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Feb 1 03:00:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:00:50 localhost systemd[1]: tmp-crun.1rA6p2.mount: Deactivated successfully. Feb 1 03:00:50 localhost podman[58697]: 2026-02-01 08:00:50.888753804 +0000 UTC m=+0.096683414 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, 
url=https://www.redhat.com) Feb 1 03:00:51 localhost podman[58697]: 2026-02-01 08:00:51.085931629 +0000 UTC m=+0.293861219 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:00:51 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:00:52 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.8 scrub starts Feb 1 03:00:52 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.8 scrub ok Feb 1 03:00:55 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.d scrub starts Feb 1 03:00:55 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 6.d scrub ok Feb 1 03:00:57 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.d scrub starts Feb 1 03:00:57 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.d scrub ok Feb 1 03:01:10 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.a scrub starts Feb 1 03:01:10 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.a scrub ok Feb 1 03:01:11 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.e scrub starts Feb 1 03:01:11 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.e scrub ok Feb 1 03:01:12 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.1a scrub starts Feb 1 03:01:12 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.1a scrub ok Feb 1 03:01:18 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.18 scrub starts Feb 1 03:01:18 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.18 scrub ok Feb 1 03:01:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:01:21 localhost systemd[1]: tmp-crun.EYk5DB.mount: Deactivated successfully. Feb 1 03:01:21 localhost podman[58865]: 2026-02-01 08:01:21.879579406 +0000 UTC m=+0.093761417 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:01:22 localhost podman[58865]: 2026-02-01 08:01:22.139684137 +0000 UTC m=+0.353866128 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:01:22 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:01:26 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.1b scrub starts Feb 1 03:01:26 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 4.1b scrub ok Feb 1 03:01:27 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.1 scrub starts Feb 1 03:01:27 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.1 scrub ok Feb 1 03:01:28 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts Feb 1 03:01:28 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok Feb 1 03:01:29 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.c scrub starts Feb 1 03:01:29 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.c scrub ok Feb 1 03:01:31 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.d scrub starts Feb 1 03:01:31 localhost ceph-osd[31357]: log_channel(cluster) log [DBG] : 7.d scrub ok Feb 1 03:01:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:01:52 localhost podman[58894]: 2026-02-01 08:01:52.870597049 +0000 UTC m=+0.081965511 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step1) Feb 1 03:01:53 localhost podman[58894]: 2026-02-01 08:01:53.095573902 +0000 UTC m=+0.306942364 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=metrics_qdr) Feb 1 03:01:53 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:02:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:02:23 localhost podman[59000]: 2026-02-01 08:02:23.870014934 +0000 UTC m=+0.084156629 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:02:24 localhost podman[59000]: 2026-02-01 08:02:24.082716677 +0000 UTC m=+0.296858412 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:02:24 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:02:54 localhost podman[59029]: 2026-02-01 08:02:54.860752267 +0000 UTC m=+0.076853462 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:02:55 localhost podman[59029]: 2026-02-01 08:02:55.049872289 +0000 UTC m=+0.265973464 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:02:55 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:03:15 localhost podman[59161]: 2026-02-01 08:03:15.162516554 +0000 UTC m=+0.098901557 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, architecture=x86_64, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:03:15 localhost podman[59161]: 2026-02-01 08:03:15.26527274 +0000 UTC m=+0.201657713 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 03:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:03:25 localhost podman[59304]: 2026-02-01 08:03:25.879467088 +0000 UTC m=+0.088438551 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 03:03:26 localhost podman[59304]: 2026-02-01 08:03:26.080670245 +0000 UTC m=+0.289641768 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 1 03:03:26 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:03:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:03:56 localhost systemd[1]: tmp-crun.gGfqhc.mount: Deactivated successfully. 
Feb 1 03:03:56 localhost podman[59333]: 2026-02-01 08:03:56.878365047 +0000 UTC m=+0.095668925 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:03:57 localhost podman[59333]: 2026-02-01 08:03:57.068719296 +0000 UTC m=+0.286023164 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:03:57 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:04:27 localhost systemd[1]: tmp-crun.PVjT6d.mount: Deactivated successfully. 
Feb 1 03:04:27 localhost podman[59439]: 2026-02-01 08:04:27.884859142 +0000 UTC m=+0.099711132 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true) Feb 1 03:04:28 localhost podman[59439]: 2026-02-01 08:04:28.080888848 +0000 UTC m=+0.295740868 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, release=1766032510, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:04:28 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:04:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:04:58 localhost podman[59468]: 2026-02-01 08:04:58.857882328 +0000 UTC m=+0.073990770 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:04:59 localhost podman[59468]: 2026-02-01 08:04:59.071772981 +0000 UTC m=+0.287881383 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:04:59 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:05:10 localhost python3[59546]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:11 localhost python3[59591]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933110.2941833-98265-189130431710396/source _original_basename=tmpezp7ee7s follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:12 localhost python3[59621]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:14 localhost ansible-async_wrapper.py[59793]: Invoked with 156375756012 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933113.4942925-98453-246736775600007/AnsiballZ_command.py _ Feb 1 03:05:14 localhost ansible-async_wrapper.py[59796]: Starting module and watcher Feb 1 03:05:14 localhost ansible-async_wrapper.py[59796]: Start watching 59797 (3600) Feb 1 03:05:14 localhost ansible-async_wrapper.py[59797]: Start module (59797) Feb 1 03:05:14 localhost ansible-async_wrapper.py[59793]: Return async_wrapper task started. 
Feb 1 03:05:14 localhost python3[59817]: ansible-ansible.legacy.async_status Invoked with jid=156375756012.59793 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:05:17 localhost puppet-user[59816]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 03:05:17 localhost puppet-user[59816]: (file: /etc/puppet/hiera.yaml) Feb 1 03:05:17 localhost puppet-user[59816]: Warning: Undefined variable '::deploy_config_name'; Feb 1 03:05:17 localhost puppet-user[59816]: (file & line not available) Feb 1 03:05:17 localhost puppet-user[59816]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 03:05:17 localhost puppet-user[59816]: (file & line not available) Feb 1 03:05:17 localhost puppet-user[59816]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 1 03:05:17 localhost puppet-user[59816]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 1 03:05:17 localhost puppet-user[59816]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.12 seconds Feb 1 03:05:17 localhost puppet-user[59816]: Notice: Applied catalog in 0.04 seconds Feb 1 03:05:17 localhost puppet-user[59816]: Application: Feb 1 03:05:17 localhost puppet-user[59816]: Initial environment: production Feb 1 03:05:17 localhost puppet-user[59816]: Converged environment: production Feb 1 03:05:17 localhost puppet-user[59816]: Run mode: user Feb 1 03:05:17 localhost puppet-user[59816]: Changes: Feb 1 03:05:17 localhost puppet-user[59816]: Events: Feb 1 03:05:17 localhost puppet-user[59816]: Resources: Feb 1 03:05:17 localhost puppet-user[59816]: Total: 10 Feb 1 03:05:17 localhost puppet-user[59816]: Time: Feb 1 03:05:17 localhost puppet-user[59816]: Schedule: 0.00 Feb 1 03:05:17 localhost puppet-user[59816]: File: 0.00 Feb 1 03:05:17 localhost puppet-user[59816]: Exec: 0.01 Feb 1 03:05:17 localhost puppet-user[59816]: Augeas: 0.01 Feb 1 03:05:17 localhost puppet-user[59816]: Transaction evaluation: 0.03 Feb 1 03:05:17 localhost puppet-user[59816]: Catalog application: 0.04 Feb 1 03:05:17 localhost puppet-user[59816]: Config retrieval: 0.15 Feb 1 03:05:17 localhost puppet-user[59816]: Last run: 1769933117 Feb 1 03:05:17 localhost puppet-user[59816]: Filebucket: 0.00 Feb 1 03:05:17 localhost puppet-user[59816]: Total: 0.04 Feb 1 03:05:17 localhost puppet-user[59816]: Version: Feb 1 03:05:17 localhost puppet-user[59816]: Config: 1769933117 Feb 1 03:05:17 localhost puppet-user[59816]: Puppet: 7.10.0 Feb 1 03:05:17 localhost ansible-async_wrapper.py[59797]: Module complete (59797) Feb 1 03:05:19 localhost ansible-async_wrapper.py[59796]: Done in kid B. 
Feb 1 03:05:24 localhost python3[60021]: ansible-ansible.legacy.async_status Invoked with jid=156375756012.59793 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:05:25 localhost python3[60037]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:05:25 localhost python3[60053]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:26 localhost python3[60103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:26 localhost python3[60121]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpn8bpg59h recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:05:27 localhost python3[60151]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:28 localhost python3[60254]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 1 03:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:05:29 localhost systemd[1]: tmp-crun.Ucx95S.mount: Deactivated successfully. 
Feb 1 03:05:29 localhost podman[60273]: 2026-02-01 08:05:29.434417223 +0000 UTC m=+0.093873451 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:05:29 localhost python3[60274]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:29 localhost podman[60273]: 2026-02-01 08:05:29.624674452 +0000 UTC m=+0.284130680 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 1 03:05:29 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:05:30 localhost python3[60334]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:31 localhost python3[60384]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:31 localhost python3[60402]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:31 localhost python3[60464]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:32 localhost python3[60482]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:32 localhost python3[60544]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:32 localhost python3[60562]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:33 localhost python3[60624]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:33 localhost python3[60642]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:34 localhost python3[60672]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False 
scope=system no_block=False force=None masked=None Feb 1 03:05:34 localhost systemd[1]: Reloading. Feb 1 03:05:34 localhost systemd-sysv-generator[60698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:05:34 localhost systemd-rc-local-generator[60693]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:05:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:05:35 localhost python3[60758]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:35 localhost python3[60776]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:35 localhost python3[60838]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:05:36 localhost python3[60856]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:36 localhost python3[60886]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:05:36 localhost systemd[1]: Reloading. Feb 1 03:05:36 localhost systemd-rc-local-generator[60909]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:05:36 localhost systemd-sysv-generator[60915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:05:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:05:37 localhost systemd[1]: Starting dnf makecache... Feb 1 03:05:37 localhost systemd[1]: Starting Create netns directory... Feb 1 03:05:37 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 03:05:37 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 03:05:37 localhost systemd[1]: Finished Create netns directory. 
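At this step Ansible installs the tripleo-container-shutdown and netns-placeholder units (unit files plus 91-*.preset files), reloads systemd and starts both; netns-placeholder appears to be a one-shot unit, since it deactivates as soon as "Create netns directory" finishes. A small sketch of checking the result by hand afterwards, which is an assumption on the reader's part and not a step the deployment itself runs:

import subprocess

# Units installed and enabled by the tripleo_ansible tasks logged above.
UNITS = ["tripleo-container-shutdown.service", "netns-placeholder.service"]

for unit in UNITS:
    enabled = subprocess.run(["systemctl", "is-enabled", unit],
                             capture_output=True, text=True).stdout.strip()
    active = subprocess.run(["systemctl", "is-active", unit],
                            capture_output=True, text=True).stdout.strip()
    # A finished one-shot unit such as netns-placeholder normally reports "inactive".
    print(f"{unit}: enabled={enabled} active={active}")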
Feb 1 03:05:37 localhost dnf[60923]: Updating Subscription Management repositories. Feb 1 03:05:37 localhost python3[60943]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 03:05:38 localhost dnf[60923]: Metadata cache refreshed recently. Feb 1 03:05:39 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 1 03:05:39 localhost systemd[1]: Finished dnf makecache. Feb 1 03:05:39 localhost systemd[1]: dnf-makecache.service: Consumed 2.064s CPU time. Feb 1 03:05:39 localhost python3[61002]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 1 03:05:39 localhost podman[61151]: 2026-02-01 08:05:39.842575645 +0000 UTC m=+0.057912195 container create e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, 
io.openshift.expose-services=, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:05:39 localhost podman[61162]: 2026-02-01 08:05:39.873001588 +0000 UTC m=+0.077223500 container create d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, architecture=x86_64, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:05:39 localhost systemd[1]: Started libpod-conmon-e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.scope. Feb 1 03:05:39 localhost systemd[1]: Started libcrun container. 
Feb 1 03:05:39 localhost podman[61186]: 2026-02-01 08:05:39.895137541 +0000 UTC m=+0.080675308 container create 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, version=17.1.13) Feb 1 03:05:39 localhost systemd[1]: Started libpod-conmon-d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8.scope. Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f493ed320f2136eba98c6f6d73d7580e3273443b9599c34d1438e87453daf45/merged/scripts supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f493ed320f2136eba98c6f6d73d7580e3273443b9599c34d1438e87453daf45/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost systemd[1]: Started libcrun container. 
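From here ansible-tripleo_container_manage creates the tripleo_step3 containers (collectd, ceilometer_init_log, nova_statedir_owner above; nova_virtlogd_wrapper and rsyslog follow), and each create event carries the label config_id=tripleo_step3 taken from its config_data. A sketch of listing those containers afterwards with podman's label filter, assuming a podman version whose ps --format json output exposes Names and State; running this by hand is not part of the deployment:

import json
import subprocess

# Containers created for this step are labelled config_id=tripleo_step3,
# as the create events in the surrounding log show.
out = subprocess.run(
    ["podman", "ps", "-a",
     "--filter", "label=config_id=tripleo_step3",
     "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for ctr in json.loads(out):
    print(ctr.get("Names"), ctr.get("State"))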
Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a7790e7cad798695025ef44722873ac2669462e661d130061be9d691861f40/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost podman[61163]: 2026-02-01 08:05:39.90915327 +0000 UTC m=+0.111642668 container create 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, container_name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, 
url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:05:39 localhost podman[61151]: 2026-02-01 08:05:39.809996354 +0000 UTC m=+0.025332924 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 1 03:05:39 localhost systemd[1]: Started libpod-conmon-1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e.scope. Feb 1 03:05:39 localhost systemd[1]: Started libcrun container. Feb 1 03:05:39 localhost podman[61162]: 2026-02-01 08:05:39.826390828 +0000 UTC m=+0.030612740 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d315715373fb2ed69473b661022a322c730f5613516f294042e6eac2843e9be/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost podman[61163]: 2026-02-01 08:05:39.828088361 +0000 UTC m=+0.030577749 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d315715373fb2ed69473b661022a322c730f5613516f294042e6eac2843e9be/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d315715373fb2ed69473b661022a322c730f5613516f294042e6eac2843e9be/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost systemd[1]: Started libpod-conmon-4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa.scope. Feb 1 03:05:39 localhost podman[61186]: 2026-02-01 08:05:39.934011109 +0000 UTC m=+0.119548876 container init 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_statedir_owner, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510) Feb 1 03:05:39 localhost podman[61186]: 2026-02-01 08:05:39.839085536 +0000 UTC m=+0.024623353 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:05:39 localhost systemd[1]: Started libcrun container. Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost podman[61214]: 2026-02-01 08:05:39.94554943 +0000 UTC m=+0.077352294 container create 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 1 03:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:39 localhost podman[61163]: 2026-02-01 08:05:39.949575936 +0000 UTC m=+0.152065324 container init 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 
17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtlogd_wrapper, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true) Feb 1 03:05:39 localhost podman[61163]: 2026-02-01 08:05:39.955879793 +0000 UTC m=+0.158369181 container start 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container) Feb 1 03:05:39 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:39 localhost podman[61162]: 2026-02-01 08:05:39.964614467 +0000 UTC m=+0.168836369 container init d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1) Feb 1 03:05:39 localhost podman[61162]: 2026-02-01 08:05:39.971724069 +0000 UTC m=+0.175945981 container start d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, distribution-scope=public, container_name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:05:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:05:39 localhost podman[61151]: 2026-02-01 08:05:39.974658291 +0000 UTC m=+0.189994861 container init e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3) Feb 1 03:05:39 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root 
--volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Feb 1 03:05:39 localhost systemd[1]: libpod-d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8.scope: Deactivated successfully. Feb 1 03:05:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:05:40 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:40 localhost podman[61186]: 2026-02-01 08:05:40.001726899 +0000 UTC m=+0.187264676 container start 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1766032510, version=17.1.13, io.openshift.expose-services=, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:05:40 localhost systemd[1]: Created slice User Slice of UID 0. 
Feb 1 03:05:40 localhost podman[61186]: 2026-02-01 08:05:40.008040477 +0000 UTC m=+0.193578284 container attach 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:05:40 localhost podman[61186]: 2026-02-01 08:05:40.012694463 +0000 UTC m=+0.198232260 container died 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step3, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, container_name=nova_statedir_owner) Feb 1 03:05:40 localhost podman[61214]: 2026-02-01 08:05:39.913671331 +0000 UTC m=+0.045474225 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 1 03:05:40 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 1 03:05:40 localhost systemd[1]: libpod-1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e.scope: Deactivated successfully. Feb 1 03:05:40 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:40 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 1 03:05:40 localhost systemd[1]: Starting User Manager for UID 0... Feb 1 03:05:40 localhost podman[61151]: 2026-02-01 08:05:40.052214671 +0000 UTC m=+0.267551241 container start e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, release=1766032510, 
name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:05:40 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume 
/var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 1 03:05:40 localhost podman[61264]: 2026-02-01 08:05:40.09369807 +0000 UTC m=+0.105600399 container died d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:05:40 localhost podman[61284]: 2026-02-01 08:05:40.072028341 +0000 UTC m=+0.067998150 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:05:40 localhost systemd[1]: Started libpod-conmon-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope. Feb 1 03:05:40 localhost systemd[1]: Started libcrun container. Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost systemd[61307]: Queued start job for default target Main User Target. Feb 1 03:05:40 localhost systemd[61307]: Created slice User Application Slice. Feb 1 03:05:40 localhost systemd[61307]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 1 03:05:40 localhost systemd[61307]: Started Daily Cleanup of User's Temporary Directories. 
Feb 1 03:05:40 localhost podman[61284]: 2026-02-01 08:05:40.154540445 +0000 UTC m=+0.150510254 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, version=17.1.13, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, container_name=collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:05:40 localhost systemd[61307]: Reached target Paths. Feb 1 03:05:40 localhost systemd[61307]: Reached target Timers. Feb 1 03:05:40 localhost systemd[61307]: Starting D-Bus User Message Bus Socket... Feb 1 03:05:40 localhost systemd[61307]: Starting Create User's Volatile Files and Directories... Feb 1 03:05:40 localhost podman[61284]: unhealthy Feb 1 03:05:40 localhost systemd[61307]: Listening on D-Bus User Message Bus Socket. Feb 1 03:05:40 localhost systemd[61307]: Reached target Sockets. 
Feb 1 03:05:40 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:05:40 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Failed with result 'exit-code'. Feb 1 03:05:40 localhost systemd[61307]: Finished Create User's Volatile Files and Directories. Feb 1 03:05:40 localhost systemd[61307]: Reached target Basic System. Feb 1 03:05:40 localhost systemd[61307]: Reached target Main User Target. Feb 1 03:05:40 localhost systemd[61307]: Startup finished in 112ms. Feb 1 03:05:40 localhost systemd[1]: Started User Manager for UID 0. Feb 1 03:05:40 localhost systemd[1]: Started Session c1 of User root. Feb 1 03:05:40 localhost systemd[1]: Started Session c2 of User root. Feb 1 03:05:40 localhost podman[61297]: 2026-02-01 08:05:40.241147877 +0000 UTC m=+0.219112033 container cleanup 1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:05:40 localhost systemd[1]: session-c1.scope: Deactivated successfully. Feb 1 03:05:40 localhost systemd[1]: libpod-conmon-1ab3a2c69b685c0c6816908493618cb01de7af949dad63b760fc7780d0aa1e7e.scope: Deactivated successfully. 
Feb 1 03:05:40 localhost podman[61214]: 2026-02-01 08:05:40.247430725 +0000 UTC m=+0.379233589 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z, container_name=rsyslog, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Feb 1 03:05:40 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Feb 1 03:05:40 localhost systemd[1]: session-c2.scope: Deactivated successfully. Feb 1 03:05:40 localhost podman[61214]: 2026-02-01 08:05:40.27922007 +0000 UTC m=+0.411022954 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, container_name=rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container) Feb 1 03:05:40 localhost podman[61272]: 2026-02-01 08:05:40.281739279 +0000 UTC m=+0.288617480 container cleanup d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_init_log, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:05:40 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=52a7bad153b9a3530edb4c6869c1fe7c --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 1 03:05:40 localhost systemd[1]: libpod-conmon-d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8.scope: Deactivated successfully. Feb 1 03:05:40 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. Feb 1 03:05:40 localhost podman[61462]: 2026-02-01 08:05:40.402097709 +0000 UTC m=+0.031420035 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, release=1766032510, config_id=tripleo_step3, url=https://www.redhat.com, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 
17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:05:40 localhost podman[61462]: 2026-02-01 08:05:40.423301122 +0000 UTC m=+0.052623428 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, container_name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:09Z, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 1 03:05:40 localhost systemd[1]: libpod-conmon-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. 
Feb 1 03:05:40 localhost podman[61501]: 2026-02-01 08:05:40.548021679 +0000 UTC m=+0.057328557 container create 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 1 03:05:40 localhost systemd[1]: Started libpod-conmon-80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5.scope. Feb 1 03:05:40 localhost systemd[1]: Started libcrun container. Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79513147d587ecdcd7bb2edc01bb3b7dc549ee20844dd0dc1e7a6b286443d3ff/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79513147d587ecdcd7bb2edc01bb3b7dc549ee20844dd0dc1e7a6b286443d3ff/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79513147d587ecdcd7bb2edc01bb3b7dc549ee20844dd0dc1e7a6b286443d3ff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79513147d587ecdcd7bb2edc01bb3b7dc549ee20844dd0dc1e7a6b286443d3ff/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost podman[61501]: 2026-02-01 08:05:40.618606649 +0000 UTC m=+0.127913597 container init 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, version=17.1.13, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt) Feb 1 03:05:40 localhost podman[61501]: 2026-02-01 08:05:40.529648503 +0000 UTC m=+0.038955371 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:40 localhost podman[61501]: 2026-02-01 08:05:40.635107446 +0000 UTC m=+0.144414334 container start 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:05:40 localhost systemd[1]: var-lib-containers-storage-overlay-e3a7790e7cad798695025ef44722873ac2669462e661d130061be9d691861f40-merged.mount: Deactivated successfully. Feb 1 03:05:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d52030e77d7c02b05ca2a3f0e1bb43639ffbb98854fc09c2ed39dede099dd2d8-userdata-shm.mount: Deactivated successfully. 
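The kernel's "xfs filesystem being remounted at ... supports timestamps until 2038 (0x7fffffff)" lines above are informational: they appear when an XFS filesystem whose on-disk format lacks the bigtime feature is (re)mounted into a container's mount namespace, and they only flag the 2038 inode-timestamp limit. A small sketch, assuming the journal has been exported to a plain-text file, that lists the affected mount points in one place:

    #!/usr/bin/env python3
    """Collect mount points named in xfs 'timestamps until 2038' kernel messages."""
    import re
    import sys

    PATTERN = re.compile(
        r"xfs filesystem being remounted at (\S+) supports timestamps until 2038"
    )

    def affected_mounts(log_path: str) -> set[str]:
        mounts = set()
        with open(log_path, errors="replace") as fh:
            for line in fh:
                # Several records can share one physical line in this dump.
                for match in PATTERN.finditer(line):
                    mounts.add(match.group(1))
        return mounts

    if __name__ == "__main__":
        for mount in sorted(affected_mounts(sys.argv[1])):
            print(mount)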
Feb 1 03:05:40 localhost podman[61578]: 2026-02-01 08:05:40.859017389 +0000 UTC m=+0.082218147 container create a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 03:05:40 localhost systemd[1]: Started libpod-conmon-a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3.scope. 
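The config_data label that tripleo_ansible attaches to each container (visible in the create record above) is a Python-literal dict (single quotes, bare True), not JSON, so json.loads on the inspected label value will fail. A sketch that reads it back with ast.literal_eval instead, assuming podman is on PATH and using the nova_virtsecretd name from the record above:

    #!/usr/bin/env python3
    """Read tripleo's config_data label back off an existing container."""
    import ast
    import json
    import subprocess

    def config_data(name: str) -> dict:
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        labels = json.loads(out)[0]["Config"]["Labels"]
        # The label value is a Python literal (e.g. 'privileged': True), not JSON.
        return ast.literal_eval(labels["config_data"])

    if __name__ == "__main__":
        data = config_data("nova_virtsecretd")
        print(data.get("start_order"), data.get("security_opt"))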
Feb 1 03:05:40 localhost podman[61578]: 2026-02-01 08:05:40.80989558 +0000 UTC m=+0.033096388 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:40 localhost systemd[1]: Started libcrun container. Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:40 localhost podman[61578]: 2026-02-01 08:05:40.93184562 +0000 UTC m=+0.155046388 container init a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:31:49Z, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:05:40 localhost podman[61578]: 2026-02-01 08:05:40.945735155 +0000 UTC m=+0.168935893 container start a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, build-date=2026-01-12T23:31:49Z, container_name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 03:05:40 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:40 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:40 localhost systemd[1]: Started Session c3 of User root. Feb 1 03:05:41 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
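The PODMAN-CONTAINER-DEBUG record above shows how tripleo_container_manage flattens config_data into a podman command line: each 'volumes' entry becomes --volume, each environment key --env, 'ulimit' entries --ulimit, and so on. A simplified reconstruction of that mapping, covering only the keys visible in these records (the real module also emits labels, the log driver, the conmon pidfile and more):

    #!/usr/bin/env python3
    """Rebuild, approximately, the podman run argv from a tripleo config_data dict."""

    def podman_run_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "run", "--name", name, "--detach=True"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("pid") == "host":
            args += ["--pid", "host"]
        if "pids_limit" in cfg:
            args += ["--pids-limit", str(cfg["pids_limit"])]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        for limit in cfg.get("ulimit", []):
            args += ["--ulimit", limit]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        return args

    if __name__ == "__main__":
        demo = {
            "image": "registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1",
            "net": "host",
            "privileged": True,
            "volumes": ["/etc/hosts:/etc/hosts:ro"],
        }
        print(" ".join(podman_run_args("iscsid", demo)))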
Feb 1 03:05:41 localhost podman[61711]: 2026-02-01 08:05:41.37600003 +0000 UTC m=+0.070527630 container create 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtnodedevd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public) Feb 1 03:05:41 localhost systemd[1]: Started libpod-conmon-883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff.scope. 
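Each container in these records gets two systemd scopes: libpod-<id>.scope for the container process itself (the "Started libcrun container" lines) and libpod-conmon-<id>.scope for the conmon monitor that later reports its exit. Both PIDs are recorded in podman's metadata; a quick sketch for reading them, podman on PATH assumed:

    #!/usr/bin/env python3
    """Show a container's PID and the PID of its conmon monitor."""
    import json
    import subprocess

    def pids(name: str) -> tuple[int, int]:
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        return state.get("Pid", 0), state.get("ConmonPid", 0)

    if __name__ == "__main__":
        pid, conmon_pid = pids("nova_virtnodedevd")
        print(f"container pid={pid} conmon pid={conmon_pid}")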
Feb 1 03:05:41 localhost podman[61727]: 2026-02-01 08:05:41.434862073 +0000 UTC m=+0.085358674 container create 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1766032510, container_name=iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Feb 1 03:05:41 localhost systemd[1]: Started libcrun container. 
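Several podman records in this step are logged out of order relative to their own timestamps (for example, an "image pull" with a smaller m=+ offset appearing after the matching "container init") because podman writes its events asynchronously while the journal keeps arrival order. A sketch that re-sorts events by the timestamp podman embeds in each message, with the pattern taken from the records above and a plain-text log export assumed:

    #!/usr/bin/env python3
    """Re-order podman container/image events by their embedded timestamps."""
    import re
    import sys

    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
        r"(?P<what>image pull|container \w+) (?P<subject>\S+)"
    )

    def events(log_path: str):
        with open(log_path, errors="replace") as fh:
            for line in fh:
                for match in EVENT.finditer(line):
                    yield match.group("ts"), match.group("what"), match.group("subject")

    if __name__ == "__main__":
        # Lexicographic order of these timestamps is close enough for eyeballing.
        for ts, what, subject in sorted(events(sys.argv[1])):
            print(ts, what, subject)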
Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost podman[61711]: 2026-02-01 08:05:41.346021782 +0000 UTC m=+0.040549382 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:41 localhost podman[61711]: 2026-02-01 08:05:41.450161433 +0000 UTC m=+0.144689063 container init 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, container_name=nova_virtnodedevd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z) Feb 1 03:05:41 localhost podman[61711]: 2026-02-01 08:05:41.459573887 +0000 UTC m=+0.154101517 container start 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=1766032510, config_id=tripleo_step3, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtnodedevd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z) Feb 1 03:05:41 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:41 localhost systemd[1]: Started libpod-conmon-28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.scope. Feb 1 03:05:41 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:41 localhost systemd[1]: Started libcrun container. Feb 1 03:05:41 localhost podman[61727]: 2026-02-01 08:05:41.401104486 +0000 UTC m=+0.051601107 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 1 03:05:41 localhost systemd[1]: Started Session c4 of User root. Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179e7ed4ab403439e752a2c426c6db4ca9807018662c061e320fe01562a6e116/merged/etc/target supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179e7ed4ab403439e752a2c426c6db4ca9807018662c061e320fe01562a6e116/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
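The "Started /usr/bin/podman healthcheck run <id>" unit above is systemd executing the healthcheck declared in iscsid's config_data ('test': '/openstack/healthcheck'), which tripleo passes to podman as --healthcheck-command in the run command shown below. The same check can be triggered by hand; a sketch, with the exit-code interpretation hedged (0 means the check passed, anything else should be treated as failed or not yet ready):

    #!/usr/bin/env python3
    """Trigger a container's configured healthcheck once and report the outcome."""
    import subprocess
    import sys

    def check(name: str) -> int:
        # `podman healthcheck run` executes the container's own healthcheck command.
        proc = subprocess.run(["podman", "healthcheck", "run", name])
        if proc.returncode == 0:
            print(f"{name}: healthy")
        else:
            print(f"{name}: healthcheck returned {proc.returncode} (failed or not ready)")
        return proc.returncode

    if __name__ == "__main__":
        sys.exit(check("iscsid"))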
Feb 1 03:05:41 localhost podman[61727]: 2026-02-01 08:05:41.53625314 +0000 UTC m=+0.186749771 container init 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid) Feb 1 03:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:05:41 localhost podman[61727]: 2026-02-01 08:05:41.576919793 +0000 UTC m=+0.227416404 container start 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:05:41 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. 
Feb 1 03:05:41 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=848fbaed99314033c0982eb0cffd8af7 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 1 03:05:41 localhost systemd[1]: Started Session c5 of User root. Feb 1 03:05:41 localhost systemd[1]: session-c4.scope: Deactivated successfully. Feb 1 03:05:41 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
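Within tripleo_step3 these containers come up by 'start_order': nova_virtsecretd at 1, then nova_virtnodedevd and iscsid at 2, with 'depends_on' pulling in the virtlogd wrapper service first; the records that follow continue with nova_virtstoraged at 3. A toy sketch of that grouping rule, assuming only that equal start_order values form one batch (the real scheduling, including depends_on handling, lives in tripleo_container_manage):

    #!/usr/bin/env python3
    """Group containers into start batches by their tripleo start_order value."""

    def batches(containers: dict[str, dict]) -> list[list[str]]:
        by_order: dict[int, list[str]] = {}
        for name, cfg in containers.items():
            by_order.setdefault(cfg.get("start_order", 0), []).append(name)
        return [sorted(by_order[order]) for order in sorted(by_order)]

    if __name__ == "__main__":
        step3 = {
            "nova_virtsecretd": {"start_order": 1},
            "nova_virtnodedevd": {"start_order": 2},
            "iscsid": {"start_order": 2},
            "nova_virtstoraged": {"start_order": 3},
        }
        print(batches(step3))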
Feb 1 03:05:41 localhost podman[61774]: 2026-02-01 08:05:41.673344313 +0000 UTC m=+0.083477956 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, release=1766032510, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc.) Feb 1 03:05:41 localhost kernel: Loading iSCSI transport class v2.0-870. 
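The "Loading iSCSI transport class v2.0-870." line is a host-kernel side effect of the iscsid container starting: the container runs privileged with /dev, /sys and /lib/modules bind-mounted from the host, so bringing up iscsid loads the iSCSI transport module on the host kernel. A small check that the module is resident, assuming the usual module name scsi_transport_iscsi (verify on the host):

    #!/usr/bin/env python3
    """Check whether the iSCSI transport class module is loaded on the host."""

    def module_loaded(name: str) -> bool:
        # /proc/modules lists one loaded module per line, name first.
        with open("/proc/modules") as fh:
            return any(line.split()[0] == name for line in fh)

    if __name__ == "__main__":
        name = "scsi_transport_iscsi"
        print(f"{name}: {'loaded' if module_loaded(name) else 'not loaded'}")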
Feb 1 03:05:41 localhost podman[61774]: 2026-02-01 08:05:41.726001551 +0000 UTC m=+0.136135254 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_id=tripleo_step3, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:05:41 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
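The closing "<container-id>.service: Deactivated successfully" line pairs with the healthcheck records above: on this host each container is represented by a libpod-<id>.scope, a libpod-conmon-<id>.scope and, when a healthcheck is defined, a transient <id>.service, all of which appear in this step. A sketch that checks those units for one full container ID (the iscsid ID from the records above; systemctl assumed available):

    #!/usr/bin/env python3
    """Report the state of the systemd units podman creates for one container."""
    import subprocess

    def unit_states(container_id: str) -> dict[str, str]:
        units = [
            f"libpod-{container_id}.scope",
            f"libpod-conmon-{container_id}.scope",
            f"{container_id}.service",  # healthcheck runner, only if a healthcheck exists
        ]
        states = {}
        for unit in units:
            proc = subprocess.run(
                ["systemctl", "is-active", unit], capture_output=True, text=True
            )
            states[unit] = proc.stdout.strip() or "unknown"
        return states

    if __name__ == "__main__":
        cid = "28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504"
        for unit, state in unit_states(cid).items():
            print(f"{state:>10}  {unit}")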
Feb 1 03:05:42 localhost podman[61893]: 2026-02-01 08:05:42.106122437 +0000 UTC m=+0.103160102 container create 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 1 03:05:42 localhost podman[61893]: 2026-02-01 08:05:42.053190029 +0000 UTC m=+0.050227714 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:42 localhost 
systemd[1]: Started libpod-conmon-39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5.scope. Feb 1 03:05:42 localhost systemd[1]: Started libcrun container. Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost podman[61893]: 2026-02-01 08:05:42.196731474 +0000 UTC m=+0.193769149 container init 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, container_name=nova_virtstoraged, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Feb 1 03:05:42 localhost podman[61893]: 2026-02-01 08:05:42.210807325 +0000 UTC m=+0.207845000 container start 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step3, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 
'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:05:42 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:42 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:42 localhost systemd[1]: Started Session c6 of User root. Feb 1 03:05:42 localhost systemd[1]: session-c6.scope: Deactivated successfully. 
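The PODMAN-CONTAINER-DEBUG entry above records the exact podman run invocation tripleo_ansible used for nova_virtstoraged, with every bind mount spelled out as a --volume flag. A minimal sketch for pulling those mounts back out of such an entry; the log path is an assumption:

    import re
    import sys

    def bind_mounts(debug_entry: str) -> list[str]:
        """Return each SRC:DST[:opts] bind mount from a logged `podman run` command."""
        return re.findall(r"--volume\s+(\S+)", debug_entry)

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/messages"  # assumed location
        with open(path, errors="replace") as fh:
            for line in fh:
                if "PODMAN-CONTAINER-DEBUG" in line and "--name nova_virtstoraged" in line:
                    print("\n".join(bind_mounts(line)))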
Feb 1 03:05:42 localhost podman[61998]: 2026-02-01 08:05:42.665534247 +0000 UTC m=+0.082975230 container create 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtqemud, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z) Feb 1 03:05:42 localhost systemd[1]: Started libpod-conmon-526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70.scope. 
Feb 1 03:05:42 localhost podman[61998]: 2026-02-01 08:05:42.614205649 +0000 UTC m=+0.031646652 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:42 localhost systemd[1]: Started libcrun container. Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:42 localhost podman[61998]: 2026-02-01 08:05:42.736557671 +0000 UTC m=+0.153998654 container init 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=1766032510, container_name=nova_virtqemud, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 
'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:05:42 localhost podman[61998]: 2026-02-01 08:05:42.745485491 +0000 UTC m=+0.162926474 container start 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, version=17.1.13, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 03:05:42 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:42 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:42 localhost systemd[1]: Started Session c7 of User root. Feb 1 03:05:42 localhost systemd[1]: tmp-crun.B8fDCu.mount: Deactivated successfully. Feb 1 03:05:42 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
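nova_virtstoraged and nova_virtqemud above are launched from the same image with nearly identical mounts; what distinguishes them in the config_data label is start_order (3 vs 4) plus the shared depends_on on tripleo_nova_virtlogd_wrapper.service. Because that label is a Python-literal dict, it can be recovered with ast.literal_eval once the balanced {...} span is located. A minimal sketch under that assumption (log path assumed, helper names illustrative):

    import ast
    import re

    def config_data(entry: str) -> dict:
        """Parse the config_data={...} payload embedded in a podman journal entry."""
        start = entry.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(entry[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # The payload is a Python literal (single quotes, True/False),
                    # so ast.literal_eval is sufficient; no JSON round-trip needed.
                    return ast.literal_eval(entry[start:i + 1])
        raise ValueError("unterminated config_data payload")

    if __name__ == "__main__":
        seen = set()
        with open("/var/log/messages", errors="replace") as fh:  # assumed location
            for line in fh:
                if " container create " not in line or "config_data=" not in line:
                    continue
                name = re.search(r"container_name=([\w.-]+)", line)
                key = name.group(1) if name else "?"
                if key in seen:
                    continue
                seen.add(key)
                cfg = config_data(line)
                print(key, cfg.get("start_order"), cfg.get("depends_on"))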
Feb 1 03:05:43 localhost podman[62104]: 2026-02-01 08:05:43.243745636 +0000 UTC m=+0.098635920 container create 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_id=tripleo_step3, container_name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, architecture=x86_64) Feb 1 03:05:43 localhost podman[62104]: 2026-02-01 08:05:43.189133436 +0000 UTC m=+0.044023800 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:43 localhost systemd[1]: 
Started libpod-conmon-3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac.scope. Feb 1 03:05:43 localhost systemd[1]: Started libcrun container. Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:05:43 localhost podman[62104]: 2026-02-01 08:05:43.318956822 +0000 UTC m=+0.173847136 container init 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, container_name=nova_virtproxyd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git) Feb 1 03:05:43 localhost podman[62104]: 2026-02-01 08:05:43.330217294 +0000 UTC m=+0.185107608 container start 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 1 03:05:43 localhost python3[61002]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:05:43 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:05:43 localhost systemd[1]: Started Session c8 of User root. Feb 1 03:05:43 localhost systemd[1]: session-c8.scope: Deactivated successfully. 
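Each of the three nova_virt* containers above runs through the same podman lifecycle (image pull, create, init, start) a start_order step apart, and later healthcheck activity shows up as health_status and exec_died events of the same shape. A minimal sketch that rebuilds that timeline per container from entries like the ones above (the log path is an assumption):

    import re
    from collections import defaultdict

    # Matches podman event entries such as:
    #   2026-02-01 08:05:43.243... +0000 UTC m=+0.098... container start 3165c6... (image=..., name=nova_virtproxyd, ...
    EVENT_RE = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
        r"container (create|init|start|exec_died|health_status) [0-9a-f]{64} "
        r"\(image=[^,]+, name=([^,)]+)"
    )

    def timeline(log_path="/var/log/messages"):  # assumed location
        """Collect (timestamp, event) pairs per container name."""
        events = defaultdict(list)
        with open(log_path, errors="replace") as fh:
            for line in fh:
                for when, event, name in EVENT_RE.findall(line):
                    events[name].append((when, event))
        return events

    if __name__ == "__main__":
        for name, evs in timeline().items():
            print(name)
            for when, event in sorted(evs):
                print(f"  {when}  {event}")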
Feb 1 03:05:43 localhost python3[62183]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:44 localhost python3[62199]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:44 localhost python3[62215]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:44 localhost python3[62231]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:44 localhost python3[62247]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:45 localhost python3[62263]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:45 localhost python3[62279]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:45 localhost python3[62295]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:46 localhost python3[62311]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:46 localhost python3[62328]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:46 localhost python3[62344]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:46 localhost python3[62360]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:47 localhost python3[62376]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:47 localhost python3[62392]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:47 localhost python3[62408]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:47 localhost python3[62424]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:48 localhost python3[62440]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:48 localhost python3[62456]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:05:48 localhost python3[62517]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:49 localhost python3[62546]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None 
local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:50 localhost python3[62575]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:50 localhost python3[62604]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:51 localhost python3[62633]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:51 localhost python3[62662]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:53 localhost python3[62691]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:53 localhost systemd[1]: Stopping User Manager for UID 0... Feb 1 03:05:53 localhost systemd[61307]: Activating special unit Exit the Session... Feb 1 03:05:53 localhost systemd[61307]: Stopped target Main User Target. Feb 1 03:05:53 localhost systemd[61307]: Stopped target Basic System. Feb 1 03:05:53 localhost systemd[61307]: Stopped target Paths. Feb 1 03:05:53 localhost systemd[61307]: Stopped target Sockets. Feb 1 03:05:53 localhost systemd[61307]: Stopped target Timers. Feb 1 03:05:53 localhost systemd[61307]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 03:05:53 localhost systemd[61307]: Closed D-Bus User Message Bus Socket. Feb 1 03:05:53 localhost systemd[61307]: Stopped Create User's Volatile Files and Directories. Feb 1 03:05:53 localhost systemd[61307]: Removed slice User Application Slice. 
Feb 1 03:05:53 localhost systemd[61307]: Reached target Shutdown. Feb 1 03:05:53 localhost systemd[61307]: Finished Exit the Session. Feb 1 03:05:53 localhost systemd[61307]: Reached target Exit the Session. Feb 1 03:05:53 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 1 03:05:53 localhost systemd[1]: Stopped User Manager for UID 0. Feb 1 03:05:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 1 03:05:53 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 1 03:05:53 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 1 03:05:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 1 03:05:53 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 1 03:05:53 localhost python3[62720]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:54 localhost python3[62753]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933148.4357355-99659-3428140139111/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:05:54 localhost python3[62769]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 03:05:54 localhost systemd[1]: Reloading. Feb 1 03:05:54 localhost systemd-rc-local-generator[62794]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:05:54 localhost systemd-sysv-generator[62799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:05:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:05:55 localhost python3[62821]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:05:55 localhost systemd[1]: Reloading. Feb 1 03:05:55 localhost systemd-rc-local-generator[62851]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:05:55 localhost systemd-sysv-generator[62854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:05:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:05:55 localhost systemd[1]: Starting collectd container... 
Feb 1 03:05:55 localhost systemd[1]: Started collectd container.
Feb 1 03:05:56 localhost python3[62888]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:05:56 localhost systemd[1]: Reloading.
Feb 1 03:05:56 localhost systemd-sysv-generator[62921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:05:56 localhost systemd-rc-local-generator[62916]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:05:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:05:56 localhost systemd[1]: Starting iscsid container...
Feb 1 03:05:56 localhost systemd[1]: Started iscsid container.
Feb 1 03:05:57 localhost python3[62955]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:05:58 localhost systemd[1]: Reloading.
Feb 1 03:05:58 localhost systemd-sysv-generator[62986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:05:58 localhost systemd-rc-local-generator[62982]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:05:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:05:58 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Feb 1 03:05:58 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Feb 1 03:05:59 localhost python3[63023]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:05:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.
Feb 1 03:05:59 localhost podman[63025]: 2026-02-01 08:05:59.857383858 +0000 UTC m=+0.074421232 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64) Feb 1 03:06:00 localhost podman[63025]: 2026-02-01 08:06:00.038528411 +0000 UTC m=+0.255565865 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, 
vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 1 03:06:00 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:06:00 localhost systemd[1]: Reloading. Feb 1 03:06:00 localhost systemd-rc-local-generator[63086]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:06:00 localhost systemd-sysv-generator[63089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:06:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:06:00 localhost systemd[1]: Starting nova_virtnodedevd container... Feb 1 03:06:00 localhost tripleo-start-podman-container[63092]: Creating additional drop-in dependency for "nova_virtnodedevd" (883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff) Feb 1 03:06:00 localhost systemd[1]: Reloading. Feb 1 03:06:00 localhost systemd-sysv-generator[63153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:06:00 localhost systemd-rc-local-generator[63148]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:06:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:06:01 localhost systemd[1]: Started nova_virtnodedevd container. 
Feb 1 03:06:01 localhost python3[63175]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:06:01 localhost systemd[1]: Reloading.
Feb 1 03:06:01 localhost systemd-sysv-generator[63207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:01 localhost systemd-rc-local-generator[63202]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:02 localhost systemd[1]: Starting nova_virtproxyd container...
Feb 1 03:06:02 localhost tripleo-start-podman-container[63215]: Creating additional drop-in dependency for "nova_virtproxyd" (3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac)
Feb 1 03:06:02 localhost systemd[1]: Reloading.
Feb 1 03:06:02 localhost systemd-sysv-generator[63277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:02 localhost systemd-rc-local-generator[63271]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:02 localhost systemd[1]: Started nova_virtproxyd container.
Feb 1 03:06:03 localhost python3[63299]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:06:03 localhost systemd[1]: Reloading.
Feb 1 03:06:03 localhost systemd-rc-local-generator[63329]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:03 localhost systemd-sysv-generator[63332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:03 localhost systemd[1]: Starting nova_virtqemud container...
Feb 1 03:06:03 localhost tripleo-start-podman-container[63339]: Creating additional drop-in dependency for "nova_virtqemud" (526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70)
Feb 1 03:06:03 localhost systemd[1]: Reloading.
Feb 1 03:06:03 localhost systemd-rc-local-generator[63395]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:03 localhost systemd-sysv-generator[63399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:04 localhost systemd[1]: Started nova_virtqemud container.
Feb 1 03:06:04 localhost python3[63424]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:06:04 localhost systemd[1]: Reloading.
Feb 1 03:06:04 localhost systemd-sysv-generator[63454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:04 localhost systemd-rc-local-generator[63451]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:04 localhost systemd[1]: Starting nova_virtsecretd container...
Feb 1 03:06:05 localhost tripleo-start-podman-container[63464]: Creating additional drop-in dependency for "nova_virtsecretd" (a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3)
Feb 1 03:06:05 localhost systemd[1]: Reloading.
Feb 1 03:06:05 localhost systemd-sysv-generator[63528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:05 localhost systemd-rc-local-generator[63523]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:05 localhost systemd[1]: Started nova_virtsecretd container.
Feb 1 03:06:05 localhost python3[63550]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:06:06 localhost systemd[1]: Reloading.
Feb 1 03:06:06 localhost systemd-sysv-generator[63578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:06:06 localhost systemd-rc-local-generator[63573]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:06:06 localhost systemd[1]: Starting nova_virtstoraged container...
Feb 1 03:06:06 localhost tripleo-start-podman-container[63590]: Creating additional drop-in dependency for "nova_virtstoraged" (39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5)
Feb 1 03:06:06 localhost systemd[1]: Reloading.
Feb 1 03:06:06 localhost systemd-rc-local-generator[63645]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:06:06 localhost systemd-sysv-generator[63649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:06:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:06:06 localhost systemd[1]: Started nova_virtstoraged container. Feb 1 03:06:07 localhost python3[63674]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:06:07 localhost systemd[1]: Reloading. Feb 1 03:06:07 localhost systemd-rc-local-generator[63700]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:06:07 localhost systemd-sysv-generator[63706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:06:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:06:07 localhost systemd[1]: Starting rsyslog container... Feb 1 03:06:08 localhost systemd[1]: Started libcrun container. Feb 1 03:06:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:08 localhost podman[63714]: 2026-02-01 08:06:08.036129448 +0000 UTC m=+0.119578417 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z) Feb 1 03:06:08 localhost podman[63714]: 2026-02-01 08:06:08.046365669 +0000 UTC m=+0.129814648 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 1 03:06:08 localhost podman[63714]: rsyslog Feb 1 03:06:08 localhost systemd[1]: Started rsyslog container. Feb 1 03:06:08 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. Feb 1 03:06:08 localhost podman[63747]: 2026-02-01 08:06:08.207859546 +0000 UTC m=+0.050464451 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=rsyslog) Feb 1 03:06:08 localhost podman[63747]: 2026-02-01 08:06:08.23032886 +0000 UTC m=+0.072933695 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20260112.1) Feb 1 03:06:08 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:06:08 localhost podman[63763]: 2026-02-01 08:06:08.315150217 +0000 UTC m=+0.059902818 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible) Feb 1 03:06:08 localhost podman[63763]: rsyslog Feb 1 03:06:08 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:08 localhost python3[63790]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:06:08 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Feb 1 03:06:08 localhost systemd[1]: Stopped rsyslog container. Feb 1 03:06:08 localhost systemd[1]: Starting rsyslog container... Feb 1 03:06:08 localhost systemd[1]: Started libcrun container. 
Feb 1 03:06:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:08 localhost podman[63791]: 2026-02-01 08:06:08.625046832 +0000 UTC m=+0.090947319 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, config_id=tripleo_step3, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container) Feb 1 03:06:08 localhost podman[63791]: 2026-02-01 08:06:08.633336052 +0000 UTC m=+0.099236529 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510) Feb 1 03:06:08 localhost podman[63791]: rsyslog Feb 1 03:06:08 localhost systemd[1]: Started rsyslog container. Feb 1 03:06:08 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. 
Feb 1 03:06:08 localhost podman[63814]: 2026-02-01 08:06:08.784086183 +0000 UTC m=+0.049561503 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.openshift.expose-services=) Feb 1 03:06:08 localhost podman[63814]: 2026-02-01 08:06:08.808852649 +0000 UTC m=+0.074327929 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, architecture=x86_64, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:06:08 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:06:08 localhost podman[63841]: 2026-02-01 08:06:08.893634554 +0000 UTC m=+0.058222675 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=rsyslog, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:06:08 localhost podman[63841]: rsyslog Feb 1 03:06:08 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:08 localhost systemd[1]: var-lib-containers-storage-overlay-dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0-merged.mount: Deactivated successfully. Feb 1 03:06:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893-userdata-shm.mount: Deactivated successfully. Feb 1 03:06:09 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Feb 1 03:06:09 localhost systemd[1]: Stopped rsyslog container. Feb 1 03:06:09 localhost systemd[1]: Starting rsyslog container... Feb 1 03:06:09 localhost systemd[1]: Started libcrun container. 
Feb 1 03:06:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:09 localhost podman[63886]: 2026-02-01 08:06:09.159310025 +0000 UTC m=+0.116575833 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, vcs-type=git) Feb 1 03:06:09 localhost podman[63886]: 2026-02-01 08:06:09.165518119 +0000 UTC m=+0.122783957 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 1 03:06:09 localhost podman[63886]: rsyslog Feb 1 03:06:09 localhost systemd[1]: Started rsyslog container. Feb 1 03:06:09 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. 
Feb 1 03:06:09 localhost podman[63929]: 2026-02-01 08:06:09.298197135 +0000 UTC m=+0.032626403 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, config_id=tripleo_step3, build-date=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, release=1766032510) Feb 1 03:06:09 localhost podman[63929]: 2026-02-01 08:06:09.318778939 +0000 UTC m=+0.053208177 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:06:09 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:06:09 localhost podman[63948]: 2026-02-01 08:06:09.405674451 +0000 UTC m=+0.058722961 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:09Z, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, tcib_managed=true, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=rsyslog, version=17.1.13) Feb 1 03:06:09 localhost podman[63948]: rsyslog Feb 1 03:06:09 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:09 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Feb 1 03:06:09 localhost systemd[1]: Stopped rsyslog container. Feb 1 03:06:09 localhost systemd[1]: Starting rsyslog container... Feb 1 03:06:09 localhost systemd[1]: Started libcrun container. Feb 1 03:06:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:09 localhost podman[64004]: 2026-02-01 08:06:09.898859167 +0000 UTC m=+0.107208139 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T22:10:09Z, distribution-scope=public, release=1766032510, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, container_name=rsyslog, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:06:09 localhost podman[64004]: 2026-02-01 08:06:09.910057277 +0000 UTC m=+0.118406239 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, container_name=rsyslog, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public) Feb 1 03:06:09 localhost podman[64004]: rsyslog Feb 1 03:06:09 localhost systemd[1]: Started rsyslog container. Feb 1 03:06:09 localhost python3[64015]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005604215 step=3 update_config_hash_only=False Feb 1 03:06:10 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. Feb 1 03:06:10 localhost podman[64029]: 2026-02-01 08:06:10.041012309 +0000 UTC m=+0.030010931 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:09Z) Feb 1 03:06:10 localhost systemd[1]: tmp-crun.Y1zWfX.mount: Deactivated successfully. 
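The ansible-container_puppet_config call above points the step-3 puppet run at /var/lib/container-puppet/container-puppet-tasks3.json. The log does not show that file's schema, so the following is only a minimal sketch for inspecting it on the node, assuming it is ordinary JSON; whatever keys it prints are simply what the file contains.

# Sketch: dump the top-level structure of the container-puppet tasks file
# referenced by ansible-container_puppet_config (schema not shown in this log).
import json

path = "/var/lib/container-puppet/container-puppet-tasks3.json"
with open(path) as fh:
    tasks = json.load(fh)

if isinstance(tasks, dict):
    for name, entry in tasks.items():
        print(name, "->", sorted(entry) if isinstance(entry, dict) else type(entry).__name__)
else:
    print(type(tasks).__name__, "with", len(tasks), "entries")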
Feb 1 03:06:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893-userdata-shm.mount: Deactivated successfully. Feb 1 03:06:10 localhost podman[64029]: 2026-02-01 08:06:10.086410771 +0000 UTC m=+0.075409413 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=rsyslog, build-date=2026-01-12T22:10:09Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z) Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:06:10 localhost podman[64043]: 2026-02-01 08:06:10.174473128 +0000 UTC m=+0.056120308 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T22:10:09Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, 
vcs-type=git, container_name=rsyslog, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 1 03:06:10 localhost podman[64043]: rsyslog Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
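The transient units named after container IDs (e861c060..., 28eba75e..., 75c8a36d...) invoke /usr/bin/podman healthcheck run, which executes the healthcheck command recorded in each container's config_data ('test': '/openstack/healthcheck' for collectd, iscsid and metrics_qdr). A minimal sketch of running the same check by hand and reporting the result, assuming nothing beyond the podman command that appears verbatim in these lines:

# Sketch: trigger a container healthcheck the same way the transient systemd
# units in this log do, and report the result. Uses only "podman healthcheck run",
# which appears verbatim in the log lines above.
import subprocess

def run_healthcheck(container):
    proc = subprocess.run(
        ["/usr/bin/podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    # podman exits 0 when the check passes, non-zero otherwise.
    status = "healthy" if proc.returncode == 0 else "unhealthy"
    print(f"{container}: {status} (rc={proc.returncode})")

run_healthcheck("collectd")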
Feb 1 03:06:10 localhost podman[64055]: 2026-02-01 08:06:10.284914178 +0000 UTC m=+0.083496256 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, container_name=collectd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Feb 1 03:06:10 localhost podman[64055]: 2026-02-01 08:06:10.300695201 +0000 UTC m=+0.099277259 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, 
build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Feb 1 03:06:10 localhost systemd[1]: Stopped rsyslog container. Feb 1 03:06:10 localhost systemd[1]: Starting rsyslog container... Feb 1 03:06:10 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:06:10 localhost systemd[1]: Started libcrun container. 
Feb 1 03:06:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 1 03:06:10 localhost podman[64075]: 2026-02-01 08:06:10.451383071 +0000 UTC m=+0.113429943 container init 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, container_name=rsyslog, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Feb 1 03:06:10 localhost podman[64075]: 2026-02-01 08:06:10.460630951 +0000 UTC m=+0.122677823 container start 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Feb 1 03:06:10 localhost podman[64075]: rsyslog Feb 1 03:06:10 localhost systemd[1]: Started rsyslog container. Feb 1 03:06:10 localhost systemd[1]: libpod-11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893.scope: Deactivated successfully. 
Feb 1 03:06:10 localhost python3[64104]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:06:10 localhost podman[64111]: 2026-02-01 08:06:10.641461994 +0000 UTC m=+0.073907895 container died 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, version=17.1.13, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510) Feb 1 03:06:10 localhost podman[64111]: 2026-02-01 08:06:10.66111117 +0000 UTC m=+0.093557041 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, 
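The ansible-file task at the start of this entry only ensures that /var/log/containers/stdouts exists as a root-owned directory. For reference, a rough Python equivalent of that single invocation (idempotent create plus ownership), assuming nothing beyond the parameters shown in the log:

# Sketch: approximate the ansible-file invocation above
# (path=/var/log/containers/stdouts state=directory owner=root group=root).
import os
import shutil

path = "/var/log/containers/stdouts"
os.makedirs(path, exist_ok=True)   # no 'mode' was passed, so the default umask applies
shutil.chown(path, user="root", group="root")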
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, build-date=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:06:10 localhost podman[64124]: 2026-02-01 08:06:10.752789941 +0000 UTC m=+0.058185323 container cleanup 11539428b9cd8a9a78f770d67692d57fdd3335d9c7969cf7a5a0f92decd1e893 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, distribution-scope=public, tcib_managed=true, container_name=rsyslog, version=17.1.13, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '52a7bad153b9a3530edb4c6869c1fe7c'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:06:10 localhost podman[64124]: rsyslog Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:10 localhost systemd[1]: var-lib-containers-storage-overlay-dccddcf2ee10c6337ca051732a7fa75bba9a539cf44a7a4bc7d14c12cf4b5db0-merged.mount: Deactivated successfully. Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Feb 1 03:06:10 localhost systemd[1]: Stopped rsyslog container. Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Feb 1 03:06:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 1 03:06:10 localhost systemd[1]: Failed to start rsyslog container. Feb 1 03:06:11 localhost python3[64152]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 03:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
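At this point the rsyslog container has exited immediately after every start; systemd has scheduled five restarts and then gives up with "Start request repeated too quickly", meaning the unit hit its start-rate limit and tripleo_rsyslog.service stays failed until something resets it. A small sketch for quantifying such a loop from a plain-text log like this one, matching only the "restart counter is at N" strings that appear verbatim above (the file path is illustrative):

# Sketch: count systemd restart attempts per unit in a plain-text log,
# using the "Scheduled restart job, restart counter is at N" lines above.
import re
from collections import defaultdict

pattern = re.compile(r"(\S+\.service): Scheduled restart job, restart counter is at (\d+)")
highest = defaultdict(int)

with open("/var/log/messages") as fh:          # illustrative path
    for line in fh:
        for unit, count in pattern.findall(line):
            highest[unit] = max(highest[unit], int(count))

for unit, count in sorted(highest.items()):
    print(f"{unit}: restarted {count} time(s)")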
Feb 1 03:06:11 localhost podman[64153]: 2026-02-01 08:06:11.858605774 +0000 UTC m=+0.074771893 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 1 03:06:11 localhost podman[64153]: 2026-02-01 08:06:11.872698996 +0000 UTC m=+0.088865135 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z) Feb 1 03:06:11 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:06:30 localhost systemd[1]: tmp-crun.pEXJrP.mount: Deactivated successfully. 
Feb 1 03:06:30 localhost podman[64249]: 2026-02-01 08:06:30.863353912 +0000 UTC m=+0.082665951 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1) Feb 1 03:06:31 localhost podman[64249]: 2026-02-01 08:06:31.078911443 +0000 UTC m=+0.298223452 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 03:06:31 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:06:40 localhost podman[64278]: 2026-02-01 08:06:40.860006395 +0000 UTC m=+0.077687314 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 1 03:06:40 localhost podman[64278]: 2026-02-01 08:06:40.868735658 +0000 UTC m=+0.086416557 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 1 03:06:40 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:06:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:06:42 localhost systemd[1]: tmp-crun.DzxCa5.mount: Deactivated successfully. 
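Each podman "container health_status" event above carries a parenthesized label blob (name=..., health_status=..., config_id=..., plus the image labels), and in these lines the name and health_status keys appear early in that blob. A quick way to pull out just the container name and reported health from lines shaped like these is a targeted regex rather than a full parse; a sketch under that assumption, with an illustrative log path:

# Sketch: extract (name, health_status) pairs from the podman
# "container health_status" event lines shown above.
import re

event = re.compile(r"container health_status .*?\(.*?name=([^,]+),.*?health_status=([^,)]+)")

with open("/var/log/messages") as fh:          # illustrative path
    for line in fh:
        for name, status in event.findall(line):
            print(f"{name}: {status}")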
Feb 1 03:06:42 localhost podman[64299]: 2026-02-01 08:06:42.865481304 +0000 UTC m=+0.080933426 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z) Feb 1 03:06:42 localhost podman[64299]: 2026-02-01 08:06:42.879627247 +0000 UTC m=+0.095079349 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, io.openshift.expose-services=, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 1 03:06:42 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:07:01 localhost podman[64318]: 2026-02-01 08:07:01.861983566 +0000 UTC m=+0.080896105 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:07:02 localhost podman[64318]: 2026-02-01 08:07:02.063818667 +0000 UTC m=+0.282731256 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:07:02 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:07:11 localhost systemd[1]: tmp-crun.1bCTto.mount: Deactivated successfully. 
Feb 1 03:07:11 localhost podman[64347]: 2026-02-01 08:07:11.875653884 +0000 UTC m=+0.088516844 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container) Feb 1 03:07:11 localhost podman[64347]: 2026-02-01 08:07:11.91479675 +0000 UTC m=+0.127659730 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20260112.1, 
name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd) Feb 1 03:07:11 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:07:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:07:13 localhost podman[64369]: 2026-02-01 08:07:13.866482744 +0000 UTC m=+0.082688200 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:07:13 localhost podman[64369]: 2026-02-01 08:07:13.87497218 +0000 UTC m=+0.091177596 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:07:13 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:07:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:07:32 localhost podman[64464]: 2026-02-01 08:07:32.866379412 +0000 UTC m=+0.080836013 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 1 03:07:33 localhost podman[64464]: 2026-02-01 08:07:33.05664721 +0000 UTC m=+0.271103771 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, container_name=metrics_qdr, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, 
url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:07:33 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:07:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:07:42 localhost systemd[1]: tmp-crun.qfVPv1.mount: Deactivated successfully. 
Feb 1 03:07:42 localhost podman[64493]: 2026-02-01 08:07:42.866915188 +0000 UTC m=+0.086910432 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, release=1766032510, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:07:42 localhost podman[64493]: 2026-02-01 08:07:42.876246181 +0000 UTC m=+0.096241475 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:07:42 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:07:44 localhost systemd[1]: tmp-crun.IVH0VS.mount: Deactivated successfully. 
Feb 1 03:07:44 localhost podman[64513]: 2026-02-01 08:07:44.866764722 +0000 UTC m=+0.082749943 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, config_id=tripleo_step3, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, tcib_managed=true) Feb 1 03:07:44 localhost podman[64513]: 2026-02-01 08:07:44.90470392 +0000 UTC m=+0.120689151 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:34:43Z) Feb 1 03:07:44 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:08:03 localhost systemd[1]: tmp-crun.0xpmL8.mount: Deactivated successfully. 
Feb 1 03:08:03 localhost podman[64531]: 2026-02-01 08:08:03.862859599 +0000 UTC m=+0.079399827 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:08:04 localhost podman[64531]: 2026-02-01 08:08:04.057759054 +0000 UTC m=+0.274299282 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 1 03:08:04 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:08:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:08:13 localhost podman[64560]: 2026-02-01 08:08:13.862257929 +0000 UTC m=+0.078435617 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:08:13 localhost podman[64560]: 2026-02-01 08:08:13.87091151 +0000 UTC m=+0.087089268 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13) Feb 1 03:08:13 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:08:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:08:15 localhost systemd[1]: tmp-crun.WXcbD0.mount: Deactivated successfully. 
Feb 1 03:08:15 localhost podman[64580]: 2026-02-01 08:08:15.865405326 +0000 UTC m=+0.081221694 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 1 03:08:15 localhost podman[64580]: 2026-02-01 08:08:15.874656536 +0000 UTC m=+0.090472924 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, version=17.1.13) Feb 1 03:08:15 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:08:34 localhost systemd[1]: tmp-crun.PA9IEb.mount: Deactivated successfully. 
Feb 1 03:08:34 localhost podman[64673]: 2026-02-01 08:08:34.873332545 +0000 UTC m=+0.089787354 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:08:35 localhost podman[64673]: 2026-02-01 08:08:35.042557764 +0000 UTC m=+0.259012483 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:08:35 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:08:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:08:44 localhost podman[64702]: 2026-02-01 08:08:44.871586159 +0000 UTC m=+0.083716063 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Feb 1 03:08:44 localhost podman[64702]: 2026-02-01 08:08:44.888681544 +0000 UTC m=+0.100811488 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=collectd) Feb 1 03:08:44 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:08:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:08:46 localhost podman[64722]: 2026-02-01 08:08:46.863880295 +0000 UTC m=+0.082815125 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, build-date=2026-01-12T22:34:43Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5) Feb 1 03:08:46 localhost podman[64722]: 2026-02-01 08:08:46.902632819 +0000 UTC m=+0.121567629 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:08:46 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:09:05 localhost podman[64741]: 2026-02-01 08:09:05.879252446 +0000 UTC m=+0.091454255 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:09:06 localhost podman[64741]: 2026-02-01 08:09:06.070164545 +0000 UTC m=+0.282366334 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:09:06 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:09:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:09:15 localhost systemd[1]: tmp-crun.ItcxUi.mount: Deactivated successfully. 
Feb 1 03:09:15 localhost podman[64770]: 2026-02-01 08:09:15.873236926 +0000 UTC m=+0.086416897 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.component=openstack-collectd-container) Feb 1 03:09:15 localhost podman[64770]: 2026-02-01 08:09:15.881619619 +0000 UTC m=+0.094799590 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
collectd, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 1 03:09:15 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:09:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:09:17 localhost podman[64790]: 2026-02-01 08:09:17.864937535 +0000 UTC m=+0.080778091 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 1 03:09:17 localhost podman[64790]: 2026-02-01 08:09:17.895367098 +0000 UTC m=+0.111207624 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13) Feb 1 03:09:17 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:09:36 localhost podman[64887]: 2026-02-01 08:09:36.872843847 +0000 UTC m=+0.085889577 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510) Feb 1 03:09:37 localhost podman[64887]: 2026-02-01 08:09:37.054493147 +0000 UTC m=+0.267538847 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:14Z, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Feb 1 03:09:37 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:09:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:09:46 localhost systemd[1]: tmp-crun.NvkLYr.mount: Deactivated successfully. 
Feb 1 03:09:46 localhost podman[64917]: 2026-02-01 08:09:46.881559696 +0000 UTC m=+0.093473960 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:09:46 localhost podman[64917]: 2026-02-01 08:09:46.917803845 +0000 UTC m=+0.129718159 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20260112.1) Feb 1 03:09:46 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:09:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:09:48 localhost systemd[1]: tmp-crun.Gese5p.mount: Deactivated successfully. 
Feb 1 03:09:48 localhost podman[64938]: 2026-02-01 08:09:48.871925662 +0000 UTC m=+0.084116802 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container) Feb 1 03:09:48 localhost podman[64938]: 2026-02-01 08:09:48.886760476 +0000 UTC m=+0.098951586 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z) Feb 1 03:09:48 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:10:04 localhost python3[65006]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:04 localhost python3[65051]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933403.9138386-106831-103831838187883/source _original_basename=tmpoxx7rek_ follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:05 localhost python3[65113]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:05 localhost python3[65156]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933405.2245994-106907-141639186815934/source _original_basename=tmpdxsbcle5 follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:06 localhost python3[65218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:06 localhost python3[65261]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933406.1655345-106959-280975021446585/source _original_basename=tmphfovkidy follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:10:07 localhost systemd[1]: tmp-crun.28GsRV.mount: Deactivated successfully. 
Feb 1 03:10:07 localhost podman[65324]: 2026-02-01 08:10:07.46891553 +0000 UTC m=+0.124163462 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=) Feb 1 03:10:07 localhost python3[65323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:07 localhost podman[65324]: 2026-02-01 08:10:07.63058098 +0000 UTC m=+0.285828912 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com) Feb 1 03:10:07 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:10:07 localhost python3[65395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933407.1606317-107019-216220168701269/source _original_basename=tmpr5cxhs_j follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:08 localhost python3[65425]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 03:10:08 localhost systemd[1]: Reloading. Feb 1 03:10:08 localhost systemd-sysv-generator[65452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:08 localhost systemd-rc-local-generator[65449]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 03:10:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:08 localhost systemd[1]: Reloading. Feb 1 03:10:08 localhost systemd-sysv-generator[65492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:08 localhost systemd-rc-local-generator[65485]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:09 localhost python3[65515]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:10:09 localhost systemd[1]: Reloading. Feb 1 03:10:09 localhost systemd-rc-local-generator[65536]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:09 localhost systemd-sysv-generator[65542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:09 localhost systemd[1]: Reloading. Feb 1 03:10:09 localhost systemd-rc-local-generator[65581]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:09 localhost systemd-sysv-generator[65586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:10 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Feb 1 03:10:10 localhost python3[65607]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:10:10 localhost systemd[1]: Reloading. Feb 1 03:10:10 localhost systemd-rc-local-generator[65630]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:10 localhost systemd-sysv-generator[65634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 03:10:11 localhost python3[65691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:11 localhost python3[65734]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933410.87873-107161-190547566216215/source _original_basename=tmpi7so8qs_ follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:12 localhost python3[65764]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:10:12 localhost systemd[1]: Reloading. Feb 1 03:10:12 localhost systemd-rc-local-generator[65788]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:12 localhost systemd-sysv-generator[65793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:12 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Feb 1 03:10:12 localhost python3[65818]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:14 localhost ansible-async_wrapper.py[65990]: Invoked with 568478358014 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933413.8575816-107271-33031830807480/AnsiballZ_command.py _ Feb 1 03:10:14 localhost ansible-async_wrapper.py[65993]: Starting module and watcher Feb 1 03:10:14 localhost ansible-async_wrapper.py[65993]: Start watching 65994 (3600) Feb 1 03:10:14 localhost ansible-async_wrapper.py[65994]: Start module (65994) Feb 1 03:10:14 localhost ansible-async_wrapper.py[65990]: Return async_wrapper task started. Feb 1 03:10:14 localhost python3[66014]: ansible-ansible.legacy.async_status Invoked with jid=568478358014.65990 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:10:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:10:17 localhost podman[66069]: 2026-02-01 08:10:17.151923362 +0000 UTC m=+0.104891947 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, tcib_managed=true) Feb 1 03:10:17 localhost podman[66069]: 2026-02-01 08:10:17.167607913 +0000 UTC m=+0.120576528 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, vendor=Red Hat, Inc.) Feb 1 03:10:17 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:10:17 localhost puppet-user[66012]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 03:10:17 localhost puppet-user[66012]: (file: /etc/puppet/hiera.yaml) Feb 1 03:10:17 localhost puppet-user[66012]: Warning: Undefined variable '::deploy_config_name'; Feb 1 03:10:17 localhost puppet-user[66012]: (file & line not available) Feb 1 03:10:18 localhost puppet-user[66012]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 03:10:18 localhost puppet-user[66012]: (file & line not available) Feb 1 03:10:18 localhost puppet-user[66012]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:10:18 localhost puppet-user[66012]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:10:18 localhost puppet-user[66012]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:10:18 localhost puppet-user[66012]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:10:18 localhost puppet-user[66012]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:10:18 localhost puppet-user[66012]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:10:18 localhost puppet-user[66012]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:10:18 localhost puppet-user[66012]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 1 03:10:18 localhost puppet-user[66012]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.23 seconds Feb 1 03:10:19 localhost ansible-async_wrapper.py[65993]: 65994 still running (3600) Feb 1 03:10:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:10:19 localhost podman[66152]: 2026-02-01 08:10:19.884123643 +0000 UTC m=+0.088072538 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 1 03:10:19 localhost podman[66152]: 2026-02-01 08:10:19.89562095 +0000 UTC m=+0.099569845 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64) Feb 1 03:10:19 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:10:24 localhost ansible-async_wrapper.py[65993]: 65994 still running (3595) Feb 1 03:10:25 localhost python3[66245]: ansible-ansible.legacy.async_status Invoked with jid=568478358014.65990 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:10:26 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 03:10:26 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 03:10:26 localhost systemd[1]: Reloading. Feb 1 03:10:27 localhost systemd-rc-local-generator[66315]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:27 localhost systemd-sysv-generator[66319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:27 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 03:10:27 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 03:10:27 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 03:10:27 localhost systemd[1]: run-rd1bbb4b69ea4423a9f8628b2a1d8cc41.service: Deactivated successfully. 
Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}330d40e6e8b4d501b2c8ea095074c170cae10764538f74f185579138e5e168c8' Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Feb 1 03:10:28 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Feb 1 03:10:29 localhost ansible-async_wrapper.py[65993]: 65994 still running (3590) Feb 1 03:10:33 localhost puppet-user[66012]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Feb 1 03:10:33 localhost systemd[1]: Reloading. Feb 1 03:10:33 localhost systemd-sysv-generator[67744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:33 localhost systemd-rc-local-generator[67740]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:34 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Feb 1 03:10:34 localhost snmpd[67757]: Can't find directory of RPM packages Feb 1 03:10:34 localhost snmpd[67757]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Feb 1 03:10:34 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Feb 1 03:10:34 localhost systemd[1]: Reloading. Feb 1 03:10:34 localhost systemd-sysv-generator[67783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:34 localhost systemd-rc-local-generator[67779]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:34 localhost ansible-async_wrapper.py[65993]: 65994 still running (3585) Feb 1 03:10:34 localhost systemd[1]: Reloading. Feb 1 03:10:34 localhost systemd-sysv-generator[67822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:34 localhost systemd-rc-local-generator[67818]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:34 localhost puppet-user[66012]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Feb 1 03:10:34 localhost puppet-user[66012]: Notice: Applied catalog in 16.64 seconds Feb 1 03:10:34 localhost puppet-user[66012]: Application: Feb 1 03:10:34 localhost puppet-user[66012]: Initial environment: production Feb 1 03:10:34 localhost puppet-user[66012]: Converged environment: production Feb 1 03:10:34 localhost puppet-user[66012]: Run mode: user Feb 1 03:10:34 localhost puppet-user[66012]: Changes: Feb 1 03:10:34 localhost puppet-user[66012]: Total: 8 Feb 1 03:10:34 localhost puppet-user[66012]: Events: Feb 1 03:10:34 localhost puppet-user[66012]: Success: 8 Feb 1 03:10:34 localhost puppet-user[66012]: Total: 8 Feb 1 03:10:34 localhost puppet-user[66012]: Resources: Feb 1 03:10:34 localhost puppet-user[66012]: Restarted: 1 Feb 1 03:10:34 localhost puppet-user[66012]: Changed: 8 Feb 1 03:10:34 localhost puppet-user[66012]: Out of sync: 8 Feb 1 03:10:34 localhost puppet-user[66012]: Total: 19 Feb 1 03:10:34 localhost puppet-user[66012]: Time: Feb 1 03:10:34 localhost puppet-user[66012]: Filebucket: 0.00 Feb 1 03:10:34 localhost puppet-user[66012]: Schedule: 0.00 Feb 1 03:10:34 localhost puppet-user[66012]: Augeas: 0.01 Feb 1 03:10:34 localhost puppet-user[66012]: File: 0.08 Feb 1 03:10:34 localhost puppet-user[66012]: Config retrieval: 0.29 Feb 1 03:10:34 localhost puppet-user[66012]: Service: 1.22 Feb 1 03:10:34 localhost puppet-user[66012]: Package: 10.09 Feb 1 03:10:34 localhost puppet-user[66012]: Transaction evaluation: 16.63 Feb 1 03:10:34 localhost puppet-user[66012]: Catalog application: 16.64 Feb 1 03:10:34 localhost puppet-user[66012]: Last run: 1769933434 Feb 1 03:10:34 localhost puppet-user[66012]: Exec: 5.05 Feb 1 03:10:34 localhost puppet-user[66012]: Total: 16.64 Feb 1 03:10:34 localhost puppet-user[66012]: Version: Feb 1 03:10:34 localhost puppet-user[66012]: Config: 1769933417 Feb 1 03:10:34 localhost puppet-user[66012]: Puppet: 7.10.0 Feb 1 03:10:34 localhost ansible-async_wrapper.py[65994]: Module complete (65994) Feb 1 03:10:35 localhost python3[67846]: ansible-ansible.legacy.async_status Invoked with jid=568478358014.65990 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:10:36 localhost python3[67877]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:10:36 localhost python3[67893]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:36 localhost python3[67943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:37 localhost python3[67961]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpzkg0n_dc recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:10:37 localhost python3[67991]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:10:37 localhost podman[68007]: 2026-02-01 08:10:37.924509592 +0000 UTC m=+0.146090173 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr) Feb 1 03:10:38 localhost podman[68007]: 2026-02-01 08:10:38.124765336 +0000 UTC m=+0.346345927 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr) Feb 1 03:10:38 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:10:38 localhost python3[68123]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 1 03:10:39 localhost python3[68142]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:39 localhost ansible-async_wrapper.py[65993]: Done in kid B. Feb 1 03:10:40 localhost python3[68174]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:40 localhost python3[68224]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:41 localhost python3[68242]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:41 localhost python3[68304]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:41 localhost python3[68322]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:42 localhost python3[68384]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:42 localhost python3[68402]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:43 localhost python3[68464]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:43 localhost python3[68482]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:43 localhost python3[68512]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:10:43 localhost systemd[1]: Reloading. Feb 1 03:10:43 localhost systemd-rc-local-generator[68535]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:43 localhost systemd-sysv-generator[68538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:10:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 03:10:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:10:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4946 writes, 22K keys, 4946 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4946 writes, 558 syncs, 8.86 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 86 writes, 124 keys, 86 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s#012Interval WAL: 86 writes, 43 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:10:45 localhost python3[68598]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:45 localhost python3[68616]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:46 localhost python3[68678]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:10:46 localhost python3[68696]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:47 localhost python3[68726]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:10:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:10:47 localhost systemd[1]: Reloading. Feb 1 03:10:47 localhost systemd-rc-local-generator[68766]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:10:47 localhost systemd-sysv-generator[68769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 1 03:10:47 localhost podman[68728]: 2026-02-01 08:10:47.412545539 +0000 UTC m=+0.092503469 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:10:47 localhost podman[68728]: 2026-02-01 08:10:47.445906976 +0000 UTC m=+0.125864906 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:10:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:10:47 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:10:47 localhost systemd[1]: Starting Create netns directory... Feb 1 03:10:47 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 03:10:47 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 03:10:47 localhost systemd[1]: Finished Create netns directory. Feb 1 03:10:48 localhost python3[68803]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 03:10:50 localhost python3[68862]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:10:50 localhost systemd[1]: tmp-crun.Dhsyqv.mount: Deactivated successfully. Feb 1 03:10:50 localhost podman[68893]: 2026-02-01 08:10:50.205989069 +0000 UTC m=+0.086999733 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container) Feb 1 03:10:50 localhost podman[68893]: 2026-02-01 08:10:50.24669183 +0000 UTC m=+0.127702524 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:10:50 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:10:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:10:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4734 writes, 21K keys, 4734 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4734 writes, 481 syncs, 9.84 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 26 writes, 51 keys, 26 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 26 writes, 13 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:10:50 localhost podman[69028]: 2026-02-01 08:10:50.392881705 +0000 UTC m=+0.102284671 container create 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.416483451 +0000 UTC m=+0.109844674 container create 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:10:50 localhost systemd[1]: Started libpod-conmon-07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.scope. Feb 1 03:10:50 localhost systemd[1]: Started libcrun container. Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2793eb0d727691e97e5e2f52ec5e9822efebe0b6bf32e0fb26a5897fd53d53c/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost systemd[1]: Started libpod-conmon-2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a.scope. Feb 1 03:10:50 localhost systemd[1]: Started libcrun container. 
Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f926467172f87fed8e093a0c623b4edfdf674c0cbe61bc939afde2d57f8c6/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f926467172f87fed8e093a0c623b4edfdf674c0cbe61bc939afde2d57f8c6/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f926467172f87fed8e093a0c623b4edfdf674c0cbe61bc939afde2d57f8c6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.453914278 +0000 UTC m=+0.147275501 container init 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, container_name=nova_libvirt_init_secret, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5) Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:10:50 localhost podman[69028]: 2026-02-01 08:10:50.460003612 +0000 UTC m=+0.169406588 container init 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:10:50 localhost podman[69028]: 2026-02-01 08:10:50.363203406 +0000 UTC m=+0.072606392 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.465226599 +0000 UTC m=+0.158587822 container start 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.465407475 +0000 UTC m=+0.158768708 container attach 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.372346949 +0000 UTC m=+0.065708172 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:10:50 localhost podman[69072]: 2026-02-01 08:10:50.475187018 +0000 UTC m=+0.107719726 container create 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:10:50 localhost podman[69028]: 2026-02-01 08:10:50.478662149 +0000 UTC m=+0.188065125 container start 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-type=git, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Feb 1 03:10:50 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label 
container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 1 03:10:50 localhost podman[69019]: 2026-02-01 08:10:50.386604225 +0000 UTC m=+0.091774876 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 03:10:50 localhost podman[69072]: 2026-02-01 08:10:50.446554093 +0000 UTC m=+0.079086831 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 1 03:10:50 localhost systemd[1]: Started libpod-conmon-79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.scope. Feb 1 03:10:50 localhost systemd[1]: libpod-2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a.scope: Deactivated successfully. Feb 1 03:10:50 localhost systemd[1]: Started libcrun container. 
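The PODMAN-CONTAINER-DEBUG entry above records the exact podman run invocation tripleo_ansible issued for logrotate_crond. A minimal sketch of how the resulting container can be checked on the host with standard podman commands (container name, label, healthcheck command and log path are taken from that invocation; nothing beyond them is implied by this log):

# confirm the container is up and carries the tripleo labels set via --label
podman ps --filter name=logrotate_crond
podman ps --filter label=config_id=tripleo_step4 --format '{{.Names}}'
# exercise the registered --healthcheck-command by hand
podman healthcheck run logrotate_crond
# the k8s-file log driver writes to the path passed via --log-opt
tail -n 50 /var/log/containers/stdouts/logrotate_crond.log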
Feb 1 03:10:50 localhost podman[69040]: 2026-02-01 08:10:50.566099875 +0000 UTC m=+0.259461108 container died 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19867aa9ce07feb42ab4d071eed0ec581b8be5de4a737b08d8913c4970e7b3a5/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:10:50 localhost podman[69072]: 2026-02-01 08:10:50.590351561 +0000 UTC m=+0.222884269 container init 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1) Feb 1 03:10:50 localhost podman[69096]: 2026-02-01 08:10:50.492735989 +0000 UTC m=+0.086138786 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:10:50 localhost podman[69072]: 2026-02-01 08:10:50.616965992 +0000 UTC m=+0.249498700 container start 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:10:50 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=63e53a2f3cd2422147592f2c2c6c2f61 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 1 03:10:50 localhost podman[69096]: 2026-02-01 08:10:50.626787037 +0000 UTC m=+0.220189804 container create 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:10:50 localhost systemd[1]: Started libpod-conmon-35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.scope. Feb 1 03:10:50 localhost systemd[1]: Started libcrun container. Feb 1 03:10:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a1138cbb1c83236f4de65652beadb5bc0b1f3b8c525083bd1db3fda89ebbe0/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:50 localhost podman[69201]: 2026-02-01 08:10:50.696490506 +0000 UTC m=+0.074139233 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:10:50 localhost podman[69201]: 2026-02-01 08:10:50.706143254 +0000 UTC m=+0.083791981 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:10:50 localhost podman[69201]: unhealthy Feb 1 03:10:50 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:10:50 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. 
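The "unhealthy" line and the status=1/FAILURE exit above belong to the transient 79d19d20….service unit that wraps /usr/bin/podman healthcheck run for ceilometer_agent_ipmi; the check fired within a second of container start (see the earlier health_status=starting entry), and the configured /openstack/healthcheck script returned non-zero, which often just means the agent was not ready yet. A minimal triage sketch on the host, using only names recorded in this log:

# re-run the configured healthcheck by hand and capture its exit code
podman healthcheck run ceilometer_agent_ipmi; echo "healthcheck exit=$?"
# the agent's own output goes to the k8s-file log configured at start
podman logs --tail 50 ceilometer_agent_ipmi
# the failed transient unit can be inspected like any other systemd unit
systemctl status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service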
Feb 1 03:10:50 localhost podman[69019]: 2026-02-01 08:10:50.72822047 +0000 UTC m=+0.433391111 container create a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=configure_cms_options, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:10:50 localhost systemd[1]: Started libpod-conmon-a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88.scope. Feb 1 03:10:50 localhost systemd[1]: Started libcrun container. 
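The configure_cms_options entry above packs its whole job into a single bash -c string. The same logic, spread out for readability (commands exactly as logged, hiera key and hiera.yaml path verbatim; only the line breaks and comments are added here):

# read the CMS options the deployment defines for ovn-controller
CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml)
if [ X"$CMS_OPTS" != X ]; then
    # a value is configured: publish it into the local Open vSwitch DB
    ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS
else
    # nothing configured: drop any stale key
    ovs-vsctl remove open . external_ids ovn-cms-options
fi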
Feb 1 03:10:50 localhost podman[69133]: 2026-02-01 08:10:50.800390799 +0000 UTC m=+0.318272750 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1) Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:10:50 localhost podman[69096]: 2026-02-01 08:10:50.805616696 +0000 UTC m=+0.399019483 container init 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible) Feb 1 03:10:50 localhost podman[69133]: 2026-02-01 08:10:50.807394692 +0000 UTC m=+0.325276733 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:10:50 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:10:50 localhost podman[69019]: 2026-02-01 08:10:50.860650876 +0000 UTC m=+0.565821537 container init a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=configure_cms_options, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:10:50 localhost podman[69183]: 2026-02-01 08:10:50.886952667 +0000 UTC m=+0.307754663 container cleanup 2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, vcs-type=git, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:10:50 localhost systemd[1]: libpod-conmon-2b56af8b4399e7db68243b7ae3ed38a6383683d877af84d3c6702a7134a03d7a.scope: Deactivated successfully. Feb 1 03:10:50 localhost podman[69019]: 2026-02-01 08:10:50.922531356 +0000 UTC m=+0.627701997 container start a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=configure_cms_options, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:10:50 localhost podman[69019]: 2026-02-01 08:10:50.922765023 +0000 UTC m=+0.627935734 container attach a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=configure_cms_options, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:10:50 localhost podman[69269]: 2026-02-01 08:10:50.828627772 +0000 UTC m=+0.038480692 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:10:50 localhost podman[69269]: 2026-02-01 08:10:50.974726844 +0000 UTC m=+0.184579744 container create 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 1 03:10:50 localhost ovs-vsctl[69326]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Feb 1 03:10:51 localhost podman[69096]: 2026-02-01 08:10:51.002919306 +0000 UTC m=+0.596322103 container start 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:10:51 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=63e53a2f3cd2422147592f2c2c6c2f61 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 1 03:10:51 localhost systemd[1]: Started libpod-conmon-080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.scope. Feb 1 03:10:51 localhost systemd[1]: libpod-a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88.scope: Deactivated successfully. Feb 1 03:10:51 localhost systemd[1]: Started libcrun container. 
Feb 1 03:10:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb1968646de61e5d6c5b7938dce54da276edc06f0bc75651b588722ba09cba1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:51 localhost podman[69297]: 2026-02-01 08:10:51.041421807 +0000 UTC m=+0.193929663 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64) Feb 1 03:10:51 localhost podman[69297]: 2026-02-01 08:10:51.053347239 +0000 UTC m=+0.205855075 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 1 03:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:10:51 localhost podman[69297]: unhealthy Feb 1 03:10:51 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:10:51 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed with result 'exit-code'. 
Feb 1 03:10:51 localhost podman[69269]: 2026-02-01 08:10:51.063025559 +0000 UTC m=+0.272878489 container init 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20260112.1, release=1766032510, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T23:32:04Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:10:51 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Feb 1 03:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:10:51 localhost podman[69019]: 2026-02-01 08:10:51.106614912 +0000 UTC m=+0.811785543 container died a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1766032510, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, container_name=configure_cms_options, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:10:51 localhost podman[69269]: 2026-02-01 08:10:51.15970926 +0000 UTC m=+0.369562170 container start 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git) Feb 1 03:10:51 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:10:51 localhost podman[69393]: 2026-02-01 08:10:51.179478012 +0000 UTC m=+0.067368285 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=nova_migration_target, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:10:51 localhost podman[69328]: 2026-02-01 08:10:51.268477259 +0000 UTC m=+0.250221163 container cleanup a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=configure_cms_options, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . 
external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:10:51 localhost systemd[1]: libpod-conmon-a0f2464871378e941563090e4710cf646859b636bf3611034b45ded0159cbf88.scope: Deactivated successfully. Feb 1 03:10:51 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Feb 1 03:10:51 localhost podman[69460]: 2026-02-01 08:10:51.327011201 +0000 UTC m=+0.068489292 container create 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, distribution-scope=public, container_name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:10:51 localhost systemd[1]: Started libpod-conmon-8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d.scope. Feb 1 03:10:51 localhost podman[69460]: 2026-02-01 08:10:51.284938576 +0000 UTC m=+0.026416657 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 1 03:10:51 localhost systemd[1]: Started libcrun container. 
Feb 1 03:10:51 localhost podman[69460]: 2026-02-01 08:10:51.433328591 +0000 UTC m=+0.174806682 container init 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510) Feb 1 03:10:51 localhost podman[69460]: 2026-02-01 08:10:51.442246457 +0000 UTC m=+0.183724548 container start 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git) Feb 1 03:10:51 localhost podman[69460]: 2026-02-01 08:10:51.442776413 +0000 UTC m=+0.184254504 container attach 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, distribution-scope=public, release=1766032510, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 1 03:10:51 localhost podman[69393]: 2026-02-01 08:10:51.542043779 +0000 UTC m=+0.429934112 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 1 03:10:51 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:10:52 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Feb 1 03:10:54 localhost ovs-vsctl[69640]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Feb 1 03:10:54 localhost systemd[1]: libpod-8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d.scope: Deactivated successfully. Feb 1 03:10:54 localhost systemd[1]: libpod-8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d.scope: Consumed 2.850s CPU time. Feb 1 03:10:54 localhost podman[69641]: 2026-02-01 08:10:54.396401157 +0000 UTC m=+0.053539574 container died 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, container_name=setup_ovs_manager, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510) Feb 1 03:10:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d-userdata-shm.mount: Deactivated successfully. Feb 1 03:10:54 localhost systemd[1]: var-lib-containers-storage-overlay-877c65e867b205f11a32fcdb99f229d7cc1aad0815e744014cf57490bce97673-merged.mount: Deactivated successfully. Feb 1 03:10:54 localhost podman[69641]: 2026-02-01 08:10:54.439955399 +0000 UTC m=+0.097093816 container cleanup 8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, container_name=setup_ovs_manager, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Feb 
1 03:10:54 localhost systemd[1]: libpod-conmon-8753f2d44977c485d36fd45e7d5b92f1e769f0230bda34ef2ee7941aa029294d.scope: Deactivated successfully. Feb 1 03:10:54 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1769931690 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1769931690'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Feb 1 03:10:54 localhost podman[69747]: 2026-02-01 08:10:54.88241431 +0000 UTC m=+0.078445620 container create e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 1 03:10:54 localhost podman[69758]: 2026-02-01 08:10:54.929561358 +0000 UTC m=+0.099009067 container create e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:10:54 localhost systemd[1]: Started libpod-conmon-e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.scope. Feb 1 03:10:54 localhost podman[69747]: 2026-02-01 08:10:54.841651477 +0000 UTC m=+0.037682827 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 03:10:54 localhost systemd[1]: Started libcrun container. Feb 1 03:10:54 localhost systemd[1]: Started libpod-conmon-e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.scope. Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6e92d81edd57130eba0dea91809d1be824b840176ebe669287b6264f5d2d37/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6e92d81edd57130eba0dea91809d1be824b840176ebe669287b6264f5d2d37/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6e92d81edd57130eba0dea91809d1be824b840176ebe669287b6264f5d2d37/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost systemd[1]: Started libcrun container. Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d506918155a93476a6405c9e2c98cb06d7e575d23557b96e2d10a36860f0cb4c/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d506918155a93476a6405c9e2c98cb06d7e575d23557b96e2d10a36860f0cb4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d506918155a93476a6405c9e2c98cb06d7e575d23557b96e2d10a36860f0cb4c/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Feb 1 03:10:54 localhost podman[69758]: 2026-02-01 08:10:54.883711232 +0000 UTC m=+0.053158981 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 1 03:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
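The setup_ovs_manager one-shot above (created 03:10:51, exited 03:10:54) runs container_puppet_apply.sh for tripleo::profile::base::neutron::ovn_metadata, and the ovs-vsctl entry at 03:10:54 shows the resulting OVSDB Manager record, a passive listener on ptcp:6640:127.0.0.1. A minimal verification sketch, assuming interactive shell access on this compute node (these commands are not part of the captured log):

    # Confirm the Manager record that setup_ovs_manager created
    ovs-vsctl get Open_vSwitch . manager_options
    ovs-vsctl list Manager                                  # expect target="ptcp:6640:127.0.0.1"
    # stdout of the one-shot, captured by the k8s-file log driver configured above
    cat /var/log/containers/stdouts/setup_ovs_manager.log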
Feb 1 03:10:54 localhost podman[69747]: 2026-02-01 08:10:54.993924427 +0000 UTC m=+0.189955717 container init e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510) Feb 1 03:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. 
Feb 1 03:10:55 localhost podman[69758]: 2026-02-01 08:10:55.015353532 +0000 UTC m=+0.184801251 container init e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
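The "Started /usr/bin/podman healthcheck run <id>" entries are podman's systemd-backed healthcheck scheduling: a container created with a healthcheck gets a transient unit pair named after its full container ID, which is what later reports success or failure (only the .service side appears in this excerpt; the matching .timer is assumed). A sketch for inspecting it, with the container ID and name taken from the log:

    systemctl list-timers --all 'e9aad77783b4*'
    journalctl --no-pager -n 20 \
      -u e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service
    # run the configured check once, on demand
    podman healthcheck run ovn_controller; echo "healthcheck exit: $?"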
Feb 1 03:10:55 localhost podman[69747]: 2026-02-01 08:10:55.037895833 +0000 UTC m=+0.233927133 container start e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, distribution-scope=public, release=1766032510) Feb 1 03:10:55 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. 
Feb 1 03:10:55 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 1 03:10:55 localhost systemd[1]: Created slice User Slice of UID 0. Feb 1 03:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:10:55 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
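The PODMAN-CONTAINER-DEBUG entry above is the exact podman run that ansible-tripleo_container_manage issued for ovn_controller: host networking, privileged, root, k8s-file logging to /var/log/containers/stdouts/ovn_controller.log, with restart handling apparently deferred to the tripleo_ovn_controller systemd unit installed later in this step. A quick status sketch using only names and paths from that command:

    podman ps --filter name=ovn_controller
    podman logs --tail 50 ovn_controller
    tail -n 50 /var/log/containers/stdouts/ovn_controller.log   # same stream via the k8s-file driver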
Feb 1 03:10:55 localhost podman[69758]: 2026-02-01 08:10:55.081838568 +0000 UTC m=+0.251286277 container start e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:10:55 localhost python3[68862]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=08ca8fb8877681656a098784127ead43 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 1 03:10:55 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 1 03:10:55 localhost systemd[1]: Starting User Manager for UID 0... 
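ovn_metadata_agent is launched the same way but with --pid host and --cgroupns=host, and with the kolla COPY_ALWAYS strategy: on start the image's kolla entrypoint is expected to copy whatever /var/lib/kolla/config_files/ovn_metadata_agent.json (mounted as config.json) lists, taking sources from the puppet-generated tree mounted read-only above. To see what will be copied, a sketch using only the paths mounted in that command:

    cat /var/lib/kolla/config_files/ovn_metadata_agent.json
    ls /var/lib/config-data/puppet-generated/neutron/
    podman exec ovn_metadata_agent cat /var/lib/kolla/config_files/config.json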
Feb 1 03:10:55 localhost podman[69809]: 2026-02-01 08:10:55.170222315 +0000 UTC m=+0.079101331 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, url=https://www.redhat.com) Feb 1 03:10:55 localhost podman[69809]: 2026-02-01 08:10:55.181464365 +0000 UTC m=+0.090343361 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git) Feb 1 03:10:55 localhost podman[69809]: unhealthy Feb 1 03:10:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:10:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
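The "unhealthy" verdict and the e8f71….service failure above are the first healthcheck run for ovn_metadata_agent: systemd triggers podman healthcheck run, the configured test (/openstack/healthcheck, per config_data) exits non-zero, most likely because the agent is still initializing at this point, and the transient unit records status=1/FAILURE. To repeat the same check by hand, a sketch (the last path is an assumed log filename under the /var/log/containers/neutron mount):

    podman exec ovn_metadata_agent /openstack/healthcheck; echo "exit: $?"
    podman inspect ovn_metadata_agent | grep -iA3 '"health'
    tail -n 20 /var/log/containers/neutron/ovn-metadata-agent.log   # assumed filename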
Feb 1 03:10:55 localhost podman[69793]: 2026-02-01 08:10:55.152419135 +0000 UTC m=+0.104265155 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:10:55 localhost podman[69793]: 2026-02-01 08:10:55.239516321 +0000 UTC m=+0.191362331 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible) Feb 1 03:10:55 localhost podman[69793]: unhealthy Feb 1 03:10:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:10:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:10:55 localhost systemd[69818]: Queued start job for default target Main User Target. Feb 1 03:10:55 localhost systemd[69818]: Created slice User Application Slice. Feb 1 03:10:55 localhost systemd[69818]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 1 03:10:55 localhost systemd[69818]: Started Daily Cleanup of User's Temporary Directories. Feb 1 03:10:55 localhost systemd[69818]: Reached target Paths. Feb 1 03:10:55 localhost systemd[69818]: Reached target Timers. Feb 1 03:10:55 localhost systemd[69818]: Starting D-Bus User Message Bus Socket... Feb 1 03:10:55 localhost systemd[69818]: Starting Create User's Volatile Files and Directories... Feb 1 03:10:55 localhost systemd[69818]: Finished Create User's Volatile Files and Directories. Feb 1 03:10:55 localhost systemd[69818]: Listening on D-Bus User Message Bus Socket. Feb 1 03:10:55 localhost systemd[69818]: Reached target Sockets. Feb 1 03:10:55 localhost systemd[69818]: Reached target Basic System. Feb 1 03:10:55 localhost systemd[69818]: Reached target Main User Target. Feb 1 03:10:55 localhost systemd[69818]: Startup finished in 161ms. Feb 1 03:10:55 localhost systemd[1]: Started User Manager for UID 0. Feb 1 03:10:55 localhost systemd[1]: Started Session c9 of User root. Feb 1 03:10:55 localhost systemd[1]: session-c9.scope: Deactivated successfully. Feb 1 03:10:55 localhost kernel: device br-int entered promiscuous mode Feb 1 03:10:55 localhost NetworkManager[5972]: [1769933455.4220] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Feb 1 03:10:55 localhost systemd-udevd[69897]: Network interface NamePolicy= disabled on kernel command line. 
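ovn_controller reports the same first-run "unhealthy" result; its test is '/openstack/healthcheck 6642', i.e. a check against the OVN southbound DB port, and the br-int and genev_sys_6081 devices appearing right afterwards indicate the controller has begun programming Open vSwitch. A follow-up sketch (the external_ids:ovn-remote key is where ovn-controller conventionally reads its SB address; it is an assumption here and does not appear in this log):

    podman exec ovn_controller /openstack/healthcheck 6642; echo "exit: $?"
    ovs-vsctl show                                        # br-int plus geneve tunnel ports
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote  # assumed key, see note above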
Feb 1 03:10:56 localhost python3[69917]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:56 localhost python3[69933]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:56 localhost kernel: device genev_sys_6081 entered promiscuous mode Feb 1 03:10:56 localhost systemd-udevd[69899]: Network interface NamePolicy= disabled on kernel command line. Feb 1 03:10:56 localhost NetworkManager[5972]: [1769933456.4924] device (genev_sys_6081): carrier: link connected Feb 1 03:10:56 localhost NetworkManager[5972]: [1769933456.4931] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Feb 1 03:10:56 localhost python3[69949]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:56 localhost python3[69971]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:57 localhost python3[69988]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:57 localhost python3[70004]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:57 localhost python3[70022]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:57 localhost python3[70040]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:58 localhost python3[70056]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:58 localhost python3[70072]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:58 localhost python3[70088]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:58 localhost python3[70104]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:10:59 localhost python3[70165]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:10:59 localhost python3[70195]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:00 localhost python3[70224]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:00 localhost python3[70253]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:01 localhost python3[70283]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:01 localhost python3[70312]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933458.95486-108569-263935627041416/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:02 localhost python3[70328]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 03:11:02 localhost systemd[1]: Reloading. Feb 1 03:11:02 localhost systemd-sysv-generator[70355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:02 localhost systemd-rc-local-generator[70352]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:03 localhost python3[70380]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:03 localhost systemd[1]: Reloading. Feb 1 03:11:03 localhost systemd-rc-local-generator[70409]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:03 localhost systemd-sysv-generator[70414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:03 localhost systemd[1]: Starting ceilometer_agent_compute container... Feb 1 03:11:03 localhost tripleo-start-podman-container[70420]: Creating additional drop-in dependency for "ceilometer_agent_compute" (35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9) Feb 1 03:11:03 localhost systemd[1]: Reloading. Feb 1 03:11:03 localhost systemd-rc-local-generator[70474]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:03 localhost systemd-sysv-generator[70477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:04 localhost systemd[1]: Started ceilometer_agent_compute container. 
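The ansible-stat tasks above request sha1 checksums of the installed tripleo_* unit files (get_checksum=True, checksum_algorithm=sha1) so that the ansible-copy tasks which follow only rewrite units whose content actually changed. A minimal Python sketch of that idempotency check, with a placeholder path standing in for the staged source (the real source lives in the ansible-tmp directory shown above); this is an illustration, not the modules' actual implementation:

    import hashlib
    from pathlib import Path

    def sha1_of(path):
        """Return the sha1 hex digest of a file, or None if it does not exist."""
        path = Path(path)
        if not path.is_file():
            return None
        digest = hashlib.sha1()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder staged file vs. the installed unit file named in the log.
    rendered = "/home/tripleo-admin/.ansible/tmp/source"
    installed = "/etc/systemd/system/tripleo_ovn_controller.service"
    if sha1_of(rendered) != sha1_of(installed):
        print("content differs; the copy task would replace the unit file")
    else:
        print("checksums match; the copy task would report no change")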
Feb 1 03:11:04 localhost python3[70503]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:04 localhost systemd[1]: Reloading. Feb 1 03:11:04 localhost systemd-sysv-generator[70536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:04 localhost systemd-rc-local-generator[70533]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:05 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Feb 1 03:11:05 localhost systemd[1]: Started ceilometer_agent_ipmi container. Feb 1 03:11:05 localhost systemd[1]: Stopping User Manager for UID 0... Feb 1 03:11:05 localhost systemd[69818]: Activating special unit Exit the Session... Feb 1 03:11:05 localhost systemd[69818]: Stopped target Main User Target. Feb 1 03:11:05 localhost systemd[69818]: Stopped target Basic System. Feb 1 03:11:05 localhost systemd[69818]: Stopped target Paths. Feb 1 03:11:05 localhost systemd[69818]: Stopped target Sockets. Feb 1 03:11:05 localhost systemd[69818]: Stopped target Timers. Feb 1 03:11:05 localhost systemd[69818]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 03:11:05 localhost systemd[69818]: Closed D-Bus User Message Bus Socket. Feb 1 03:11:05 localhost systemd[69818]: Stopped Create User's Volatile Files and Directories. Feb 1 03:11:05 localhost systemd[69818]: Removed slice User Application Slice. Feb 1 03:11:05 localhost systemd[69818]: Reached target Shutdown. Feb 1 03:11:05 localhost systemd[69818]: Finished Exit the Session. Feb 1 03:11:05 localhost systemd[69818]: Reached target Exit the Session. Feb 1 03:11:05 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 1 03:11:05 localhost systemd[1]: Stopped User Manager for UID 0. Feb 1 03:11:05 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 1 03:11:05 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 1 03:11:05 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 1 03:11:05 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 1 03:11:05 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 1 03:11:06 localhost python3[70574]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:06 localhost systemd[1]: Reloading. Feb 1 03:11:06 localhost systemd-rc-local-generator[70602]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:06 localhost systemd-sysv-generator[70606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
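Each ansible-systemd task in this stretch asks for state=restarted with enabled=True, and the earlier one additionally requested daemon_reload=True. In plain systemctl terms that is roughly the sequence below; a hedged sketch via subprocess (unit name taken from the log), not a substitute for the module's own error handling and change reporting:

    import subprocess

    def reload_enable_restart(unit):
        """Mirror what the logged ansible-systemd tasks request."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)   # pick up freshly copied unit files
        subprocess.run(["systemctl", "enable", unit], check=True)    # enabled=True
        subprocess.run(["systemctl", "restart", unit], check=True)   # state=restarted

    reload_enable_restart("tripleo_ceilometer_agent_ipmi.service")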
Feb 1 03:11:06 localhost systemd[1]: Starting logrotate_crond container... Feb 1 03:11:06 localhost systemd[1]: Started logrotate_crond container. Feb 1 03:11:07 localhost python3[70641]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:07 localhost systemd[1]: Reloading. Feb 1 03:11:07 localhost systemd-rc-local-generator[70673]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:07 localhost systemd-sysv-generator[70676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:07 localhost systemd[1]: Starting nova_migration_target container... Feb 1 03:11:07 localhost systemd[1]: Started nova_migration_target container. Feb 1 03:11:08 localhost python3[70709]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:11:08 localhost systemd[1]: Reloading. Feb 1 03:11:08 localhost systemd-rc-local-generator[70744]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:08 localhost systemd-sysv-generator[70748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
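The "Started /usr/bin/podman healthcheck run <id>" entries are transient, timer-driven systemd units through which podman periodically executes each container's configured healthcheck (the '/openstack/healthcheck' test visible in the config_data labels below). The same check can be run by hand, and an exit status of 0 means healthy. A small sketch, using container names rather than the full IDs seen in the log:

    import subprocess

    def container_is_healthy(name):
        """Run the container's own healthcheck command once via podman."""
        result = subprocess.run(["podman", "healthcheck", "run", name])
        return result.returncode == 0   # 0 = healthy, non-zero = unhealthy or no healthcheck defined

    for name in ("metrics_qdr", "ovn_controller", "nova_migration_target"):
        print(name, "healthy" if container_is_healthy(name) else "not healthy")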
Feb 1 03:11:08 localhost podman[70711]: 2026-02-01 08:11:08.438266365 +0000 UTC m=+0.104510723 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:11:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:08 localhost systemd[1]: Starting ovn_controller container... 
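Each health_status record carries the container's full label set, including managed_by=tripleo_ansible and a per-step config_id (tripleo_step1 for metrics_qdr above). Those labels make it straightforward to enumerate the containers a given deployment step owns; a sketch using podman's label filters, with the step value only as an example:

    import subprocess

    def containers_for_step(config_id="tripleo_step1"):
        """List running containers whose TripleO labels match the given step."""
        out = subprocess.run(
            ["podman", "ps",
             "--filter", "label=managed_by=tripleo_ansible",
             "--filter", "label=config_id=" + config_id,
             "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True)
        return out.stdout.split()

    print(containers_for_step("tripleo_step1"))   # e.g. ['metrics_qdr']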
Feb 1 03:11:08 localhost podman[70711]: 2026-02-01 08:11:08.648887731 +0000 UTC m=+0.315132119 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510) Feb 1 03:11:08 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:11:08 localhost tripleo-start-podman-container[70776]: Creating additional drop-in dependency for "ovn_controller" (e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257) Feb 1 03:11:08 localhost systemd[1]: Reloading. Feb 1 03:11:08 localhost systemd-sysv-generator[70835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:08 localhost systemd-rc-local-generator[70830]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 03:11:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:09 localhost systemd[1]: Started ovn_controller container. Feb 1 03:11:09 localhost python3[70861]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:11:09 localhost systemd[1]: Reloading. Feb 1 03:11:09 localhost systemd-sysv-generator[70889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:11:09 localhost systemd-rc-local-generator[70886]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:11:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:11:10 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 1 03:11:10 localhost systemd[1]: Started ovn_metadata_agent container. Feb 1 03:11:10 localhost python3[70943]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:12 localhost python3[71064]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005604215 step=4 update_config_hash_only=False Feb 1 03:11:12 localhost python3[71080]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:11:12 localhost python3[71096]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 03:11:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:11:17 localhost systemd[1]: tmp-crun.16ISi1.mount: Deactivated successfully. 
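ansible-container_config_data is invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 and config_pattern=container-puppet-*.json, i.e. it collects the per-step container-puppet task definitions from JSON files on disk. A minimal sketch of that collection step only; the module's real merging and override handling (config_overrides, debug) is more involved:

    import glob
    import json
    import os

    def load_step_configs(config_path, config_pattern="container-puppet-*.json"):
        """Read every matching JSON file under config_path and return
        a {filename: parsed content} mapping."""
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            with open(path) as fh:
                configs[os.path.basename(path)] = json.load(fh)
        return configs

    step4 = load_step_configs("/var/lib/tripleo-config/container-puppet-config/step_4")
    print(sorted(step4))   # container-puppet task files found for step 4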
Feb 1 03:11:17 localhost podman[71098]: 2026-02-01 08:11:17.87310671 +0000 UTC m=+0.088785791 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:11:17 localhost podman[71098]: 2026-02-01 08:11:17.93565132 +0000 UTC m=+0.151330351 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 1 03:11:17 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:11:20 localhost podman[71119]: 2026-02-01 08:11:20.894673967 +0000 UTC m=+0.109035298 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, container_name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3) Feb 1 03:11:20 localhost systemd[1]: tmp-crun.N2KxfN.mount: Deactivated successfully. Feb 1 03:11:20 localhost systemd[1]: tmp-crun.6oV0cG.mount: Deactivated successfully. 
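Besides re-running the check, the most recent health state podman recorded (healthy or starting, as in the events above) can be read from podman inspect. The exact key has changed across podman releases (State.Health in newer versions, State.Healthcheck in older ones), so the sketch below looks for either spelling; treat those field names as an assumption to verify against the podman version on the host:

    import json
    import subprocess

    def last_health_state(name):
        """Return the stored health status string for a container, if any."""
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True)
        state = json.loads(out.stdout)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status")

    print(last_health_state("iscsid"))   # e.g. 'healthy' or 'starting'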
Feb 1 03:11:20 localhost podman[71120]: 2026-02-01 08:11:20.937677512 +0000 UTC m=+0.150599558 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Feb 1 03:11:20 localhost podman[71119]: 2026-02-01 08:11:20.955661667 +0000 UTC m=+0.170023018 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-iscsid-container) Feb 1 03:11:20 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
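The config_data label reproduced in these events is the TripleO container definition itself (image, net, privileged, volumes, healthcheck, and so on). For orientation, those keys map onto podman run options in the expected way; the sketch below builds such a command line purely for illustration, handles only a few of the keys visible in the log, and is not the code path tripleo_ansible actually uses to start the containers:

    def podman_run_args(name, cfg):
        """Translate a TripleO-style config_data dict into podman run arguments."""
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("net"):
            args += ["--net", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        if cfg.get("restart"):
            args += ["--restart", cfg["restart"]]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", "{}={}".format(key, value)]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        if "healthcheck" in cfg:
            args += ["--health-cmd", cfg["healthcheck"]["test"]]
        args.append(cfg["image"])
        return args

    example = {"image": "registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1",
               "net": "host", "privileged": True, "restart": "always",
               "healthcheck": {"test": "/openstack/healthcheck"},
               "volumes": ["/etc/hosts:/etc/hosts:ro", "/dev:/dev"]}
    print(" ".join(podman_run_args("iscsid", example)))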
Feb 1 03:11:21 localhost podman[71148]: 2026-02-01 08:11:21.038263439 +0000 UTC m=+0.135605709 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:11:21 localhost podman[71120]: 2026-02-01 08:11:21.044688964 +0000 UTC m=+0.257611010 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:11:21 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:11:21 localhost podman[71148]: 2026-02-01 08:11:21.074685164 +0000 UTC m=+0.172027454 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:11:21 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:11:21 localhost podman[71181]: 2026-02-01 08:11:21.177686548 +0000 UTC m=+0.073470081 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:11:21 localhost podman[71181]: 2026-02-01 08:11:21.213805092 +0000 UTC m=+0.109588655 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.13, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:11:21 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:11:21 localhost podman[71208]: 2026-02-01 08:11:21.858908065 +0000 UTC m=+0.077899723 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:32:04Z, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 1 03:11:22 localhost podman[71208]: 2026-02-01 08:11:22.202689949 +0000 UTC m=+0.421681577 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:11:22 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:11:25 localhost podman[71231]: 2026-02-01 08:11:25.871562957 +0000 UTC m=+0.081147457 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510) Feb 1 03:11:25 localhost podman[71230]: 2026-02-01 08:11:25.930254284 +0000 UTC m=+0.142111176 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 1 03:11:25 localhost podman[71231]: 2026-02-01 08:11:25.950518142 +0000 UTC m=+0.160102662 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack 
TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 1 03:11:25 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:11:26 localhost podman[71230]: 2026-02-01 08:11:26.000900003 +0000 UTC m=+0.212756845 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:11:26 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:11:34 localhost snmpd[67757]: empty variable list in _query Feb 1 03:11:34 localhost snmpd[67757]: empty variable list in _query Feb 1 03:11:37 localhost podman[71462]: Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.69056746 +0000 UTC m=+0.087311153 container create 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 03:11:37 localhost systemd[1]: Started libpod-conmon-5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e.scope. Feb 1 03:11:37 localhost systemd[1]: Started libcrun container. 
Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.652242494 +0000 UTC m=+0.048986287 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.758997229 +0000 UTC m=+0.155740922 container init 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, version=7, release=1764794109, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.768032698 +0000 UTC m=+0.164776391 container start 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, release=1764794109, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, name=rhceph) Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.768230154 +0000 UTC m=+0.164973877 container attach 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, 
CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 03:11:37 localhost vibrant_galileo[71477]: 167 167 Feb 1 03:11:37 localhost systemd[1]: libpod-5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e.scope: Deactivated successfully. Feb 1 03:11:37 localhost podman[71462]: 2026-02-01 08:11:37.771469137 +0000 UTC m=+0.168212870 container died 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, release=1764794109, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, name=rhceph, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 03:11:37 localhost podman[71482]: 2026-02-01 08:11:37.852497889 +0000 UTC m=+0.069498574 container remove 5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_galileo, io.buildah.version=1.41.4, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 03:11:37 localhost systemd[1]: libpod-conmon-5a15f0085ce89435566d8ba897baad1bb098ea7c11f0f155254e8c0194ab996e.scope: Deactivated successfully. Feb 1 03:11:38 localhost podman[71504]: Feb 1 03:11:38 localhost podman[71504]: 2026-02-01 08:11:38.07076283 +0000 UTC m=+0.074366410 container create 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, vcs-type=git, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109) Feb 1 03:11:38 localhost systemd[1]: Started libpod-conmon-9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629.scope. Feb 1 03:11:38 localhost systemd[1]: Started libcrun container. 
Feb 1 03:11:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e956024c4ea0a50cd3fc62ced1d5cc76cf0f4c15a20dc99218f407d2180e04d/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 03:11:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e956024c4ea0a50cd3fc62ced1d5cc76cf0f4c15a20dc99218f407d2180e04d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:11:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e956024c4ea0a50cd3fc62ced1d5cc76cf0f4c15a20dc99218f407d2180e04d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 03:11:38 localhost podman[71504]: 2026-02-01 08:11:38.042253148 +0000 UTC m=+0.045856778 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 03:11:38 localhost podman[71504]: 2026-02-01 08:11:38.142136002 +0000 UTC m=+0.145739582 container init 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 03:11:38 localhost podman[71504]: 2026-02-01 08:11:38.151188082 +0000 UTC m=+0.154791652 container start 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, release=1764794109, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64) Feb 1 03:11:38 localhost podman[71504]: 2026-02-01 08:11:38.15142332 +0000 UTC m=+0.155026900 container attach 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, release=1764794109, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 03:11:38 localhost systemd[1]: var-lib-containers-storage-overlay-353506b63b93b75ce8538100a86eed5b102bad6aeda38da254d7ef347a8b2b60-merged.mount: Deactivated successfully. Feb 1 03:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:11:38 localhost podman[71683]: 2026-02-01 08:11:38.798434362 +0000 UTC m=+0.071931811 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:11:39 localhost podman[71683]: 2026-02-01 08:11:39.000696081 +0000 UTC m=+0.274193550 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container) Feb 1 03:11:39 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:11:39 localhost tender_clarke[71519]: [
Feb 1 03:11:39 localhost tender_clarke[71519]: {
Feb 1 03:11:39 localhost tender_clarke[71519]: "available": false,
Feb 1 03:11:39 localhost tender_clarke[71519]: "ceph_device": false,
Feb 1 03:11:39 localhost tender_clarke[71519]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 1 03:11:39 localhost tender_clarke[71519]: "lsm_data": {},
Feb 1 03:11:39 localhost tender_clarke[71519]: "lvs": [],
Feb 1 03:11:39 localhost tender_clarke[71519]: "path": "/dev/sr0",
Feb 1 03:11:39 localhost tender_clarke[71519]: "rejected_reasons": [
Feb 1 03:11:39 localhost tender_clarke[71519]: "Insufficient space (<5GB)",
Feb 1 03:11:39 localhost tender_clarke[71519]: "Has a FileSystem"
Feb 1 03:11:39 localhost tender_clarke[71519]: ],
Feb 1 03:11:39 localhost tender_clarke[71519]: "sys_api": {
Feb 1 03:11:39 localhost tender_clarke[71519]: "actuators": null,
Feb 1 03:11:39 localhost tender_clarke[71519]: "device_nodes": "sr0",
Feb 1 03:11:39 localhost tender_clarke[71519]: "human_readable_size": "482.00 KB",
Feb 1 03:11:39 localhost tender_clarke[71519]: "id_bus": "ata",
Feb 1 03:11:39 localhost tender_clarke[71519]: "model": "QEMU DVD-ROM",
Feb 1 03:11:39 localhost tender_clarke[71519]: "nr_requests": "2",
Feb 1 03:11:39 localhost tender_clarke[71519]: "partitions": {},
Feb 1 03:11:39 localhost tender_clarke[71519]: "path": "/dev/sr0",
Feb 1 03:11:39 localhost tender_clarke[71519]: "removable": "1",
Feb 1 03:11:39 localhost tender_clarke[71519]: "rev": "2.5+",
Feb 1 03:11:39 localhost tender_clarke[71519]: "ro": "0",
Feb 1 03:11:39 localhost tender_clarke[71519]: "rotational": "1",
Feb 1 03:11:39 localhost tender_clarke[71519]: "sas_address": "",
Feb 1 03:11:39 localhost tender_clarke[71519]: "sas_device_handle": "",
Feb 1 03:11:39 localhost tender_clarke[71519]: "scheduler_mode": "mq-deadline",
Feb 1 03:11:39 localhost tender_clarke[71519]: "sectors": 0,
Feb 1 03:11:39 localhost tender_clarke[71519]: "sectorsize": "2048",
Feb 1 03:11:39 localhost tender_clarke[71519]: "size": 493568.0,
Feb 1 03:11:39 localhost tender_clarke[71519]: "support_discard": "0",
Feb 1 03:11:39 localhost tender_clarke[71519]: "type": "disk",
Feb 1 03:11:39 localhost tender_clarke[71519]: "vendor": "QEMU"
Feb 1 03:11:39 localhost tender_clarke[71519]: }
Feb 1 03:11:39 localhost tender_clarke[71519]: }
Feb 1 03:11:39 localhost tender_clarke[71519]: ]
Feb 1 03:11:39 localhost systemd[1]: libpod-9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629.scope: Deactivated successfully.
Feb 1 03:11:39 localhost systemd[1]: libpod-9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629.scope: Consumed 1.093s CPU time.
Feb 1 03:11:39 localhost podman[71504]: 2026-02-01 08:11:39.23520846 +0000 UTC m=+1.238812060 container died 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 03:11:39 localhost podman[73566]: 2026-02-01 08:11:39.354976001 +0000 UTC m=+0.109562565 container remove 9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_clarke, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, release=1764794109, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Feb 1 03:11:39 localhost systemd[1]: libpod-conmon-9c041cfcdfac3e8e83e127562a1757b4394b8eb619eeedd29c2cf3217b05c629.scope: Deactivated successfully. Feb 1 03:11:39 localhost systemd[1]: var-lib-containers-storage-overlay-3e956024c4ea0a50cd3fc62ced1d5cc76cf0f4c15a20dc99218f407d2180e04d-merged.mount: Deactivated successfully. Feb 1 03:11:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:11:48 localhost systemd[1]: tmp-crun.3jHdLq.mount: Deactivated successfully. 
Feb 1 03:11:48 localhost podman[73594]: 2026-02-01 08:11:48.881129967 +0000 UTC m=+0.096713375 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Feb 1 03:11:48 localhost podman[73594]: 2026-02-01 08:11:48.892019135 +0000 UTC m=+0.107602533 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container) Feb 1 03:11:48 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:11:51 localhost podman[73616]: 2026-02-01 08:11:51.875702709 +0000 UTC m=+0.082957045 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5) Feb 1 03:11:51 localhost systemd[1]: tmp-crun.5IqZ5U.mount: Deactivated successfully. 
Feb 1 03:11:51 localhost podman[73615]: 2026-02-01 08:11:51.933548718 +0000 UTC m=+0.140753352 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 1 03:11:51 localhost podman[73615]: 2026-02-01 08:11:51.942799215 +0000 UTC m=+0.150003869 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T22:34:43Z, distribution-scope=public, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:11:51 localhost podman[73616]: 2026-02-01 08:11:51.958803756 +0000 UTC m=+0.166058142 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:11:51 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:11:51 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:11:52 localhost podman[73617]: 2026-02-01 08:11:52.038252118 +0000 UTC m=+0.241582828 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.buildah.version=1.41.5) Feb 1 03:11:52 localhost podman[73614]: 2026-02-01 08:11:52.084184846 +0000 UTC m=+0.294217550 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:11:52 localhost podman[73614]: 2026-02-01 08:11:52.089465435 +0000 UTC m=+0.299498189 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:11:52 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:11:52 localhost podman[73617]: 2026-02-01 08:11:52.108502044 +0000 UTC m=+0.311832694 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:11:52 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:11:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:11:54 localhost podman[73706]: 2026-02-01 08:11:54.026177075 +0000 UTC m=+0.075300169 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:11:54 localhost podman[73706]: 2026-02-01 08:11:54.406701695 +0000 UTC m=+0.455824799 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:11:54 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:11:56 localhost podman[73730]: 2026-02-01 08:11:56.854152578 +0000 UTC m=+0.070382602 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=ovn_controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.13, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team) Feb 1 03:11:56 localhost podman[73730]: 2026-02-01 08:11:56.877609218 +0000 UTC m=+0.093839252 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:11:56 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:11:56 localhost podman[73729]: 2026-02-01 08:11:56.964524168 +0000 UTC m=+0.179152441 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, architecture=x86_64, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:11:57 localhost podman[73729]: 2026-02-01 08:11:57.042501942 +0000 UTC m=+0.257130165 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13) Feb 1 03:11:57 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:12:09 localhost podman[73778]: 2026-02-01 08:12:09.866381086 +0000 UTC m=+0.082344075 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:12:10 localhost podman[73778]: 2026-02-01 08:12:10.08381454 +0000 UTC m=+0.299777539 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 1 03:12:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:12:19 localhost systemd[1]: tmp-crun.TqhkHn.mount: Deactivated successfully. 
Feb 1 03:12:19 localhost podman[73808]: 2026-02-01 08:12:19.869104632 +0000 UTC m=+0.084482643 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:12:19 localhost podman[73808]: 2026-02-01 08:12:19.905604059 +0000 UTC m=+0.120982040 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc.) Feb 1 03:12:19 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:12:22 localhost systemd[1]: tmp-crun.MU4ZH8.mount: Deactivated successfully. 
Feb 1 03:12:22 localhost podman[73838]: 2026-02-01 08:12:22.891777783 +0000 UTC m=+0.085445803 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:12:22 localhost systemd[1]: tmp-crun.QHbXGK.mount: Deactivated successfully. 
Feb 1 03:12:22 localhost podman[73831]: 2026-02-01 08:12:22.935529793 +0000 UTC m=+0.136471906 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:12:22 localhost podman[73838]: 2026-02-01 08:12:22.939245572 +0000 UTC m=+0.132913552 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public) Feb 1 03:12:22 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:12:22 localhost podman[73829]: 2026-02-01 08:12:22.863767188 +0000 UTC m=+0.071211868 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1) Feb 1 03:12:22 localhost podman[73830]: 2026-02-01 08:12:22.979929243 +0000 UTC m=+0.179214812 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z) Feb 1 03:12:22 localhost podman[73830]: 2026-02-01 08:12:22.991696389 +0000 UTC m=+0.190982028 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vendor=Red Hat, Inc.) Feb 1 03:12:23 localhost podman[73829]: 2026-02-01 08:12:23.000898473 +0000 UTC m=+0.208343173 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 1 03:12:23 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:12:23 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:12:23 localhost podman[73831]: 2026-02-01 08:12:23.042887946 +0000 UTC m=+0.243830079 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:12:23 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:12:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:12:24 localhost systemd[1]: tmp-crun.UYY8vW.mount: Deactivated successfully. 
Feb 1 03:12:24 localhost podman[73916]: 2026-02-01 08:12:24.859698321 +0000 UTC m=+0.079100690 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:12:25 localhost podman[73916]: 2026-02-01 08:12:25.236552923 +0000 UTC m=+0.455955212 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:12:25 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:12:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:12:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:12:27 localhost podman[73939]: 2026-02-01 08:12:27.860399359 +0000 UTC m=+0.079732121 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_metadata_agent) Feb 1 03:12:27 localhost systemd[1]: tmp-crun.gpYMXx.mount: Deactivated successfully. 
Feb 1 03:12:27 localhost podman[73940]: 2026-02-01 08:12:27.912398732 +0000 UTC m=+0.130855265 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible) Feb 1 03:12:27 localhost podman[73940]: 2026-02-01 08:12:27.958684703 +0000 UTC m=+0.177141266 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Feb 1 03:12:27 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:12:28 localhost podman[73939]: 2026-02-01 08:12:28.015575872 +0000 UTC m=+0.234908654 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, 
release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5) Feb 1 03:12:28 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:12:40 localhost systemd[1]: tmp-crun.suJi5q.mount: Deactivated successfully. Feb 1 03:12:40 localhost podman[74002]: 2026-02-01 08:12:40.232476813 +0000 UTC m=+0.098599645 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Feb 1 03:12:40 localhost podman[74002]: 2026-02-01 08:12:40.421973903 +0000 UTC m=+0.288096735 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:12:40 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:12:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:12:50 localhost podman[74093]: 2026-02-01 08:12:50.894048021 +0000 UTC m=+0.097420037 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, version=17.1.13, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, release=1766032510, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:12:50 localhost podman[74093]: 2026-02-01 08:12:50.935708294 +0000 UTC m=+0.139080260 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, container_name=collectd, architecture=x86_64, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:12:50 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:12:53 localhost systemd[1]: tmp-crun.oqzvO5.mount: Deactivated successfully. 
Feb 1 03:12:53 localhost podman[74115]: 2026-02-01 08:12:53.908904332 +0000 UTC m=+0.112406815 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:12:53 localhost podman[74113]: 2026-02-01 08:12:53.874143041 +0000 UTC m=+0.087168318 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, name=rhosp-rhel9/openstack-cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Feb 1 03:12:53 localhost podman[74115]: 2026-02-01 08:12:53.933260482 +0000 UTC m=+0.136762945 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:12:53 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:12:53 localhost podman[74113]: 2026-02-01 08:12:53.957831818 +0000 UTC m=+0.170857085 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:12:53 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:12:54 localhost podman[74114]: 2026-02-01 08:12:54.034171349 +0000 UTC m=+0.243699145 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:34:43Z) Feb 1 03:12:54 localhost podman[74114]: 2026-02-01 08:12:54.044596983 +0000 UTC m=+0.254124799 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:12:54 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:12:54 localhost podman[74116]: 2026-02-01 08:12:54.099960534 +0000 UTC m=+0.302613279 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 1 03:12:54 localhost podman[74116]: 2026-02-01 08:12:54.130660445 +0000 UTC m=+0.333313180 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:12:54 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:12:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:12:56 localhost podman[74200]: 2026-02-01 08:12:56.017277493 +0000 UTC m=+0.079523144 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5) Feb 1 03:12:56 localhost podman[74200]: 2026-02-01 08:12:56.343498756 +0000 UTC m=+0.405744407 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5) Feb 1 03:12:56 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:12:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:12:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:12:58 localhost systemd[1]: tmp-crun.QWpGqS.mount: Deactivated successfully. 
Feb 1 03:12:58 localhost podman[74224]: 2026-02-01 08:12:58.862475028 +0000 UTC m=+0.081528329 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510) Feb 1 03:12:58 localhost podman[74225]: 2026-02-01 08:12:58.913557252 +0000 UTC m=+0.131429565 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, release=1766032510, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:12:58 localhost podman[74224]: 2026-02-01 08:12:58.924784251 +0000 UTC m=+0.143837512 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com) Feb 1 03:12:58 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:12:58 localhost podman[74225]: 2026-02-01 08:12:58.936833096 +0000 UTC m=+0.154705429 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Feb 1 03:12:58 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:13:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:13:10 localhost podman[74272]: 2026-02-01 08:13:10.863477693 +0000 UTC m=+0.079556876 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=metrics_qdr, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z) Feb 1 03:13:11 localhost podman[74272]: 2026-02-01 08:13:11.060672669 +0000 UTC m=+0.276751872 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, 
config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:13:11 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:13:21 localhost podman[74300]: 2026-02-01 08:13:21.867462921 +0000 UTC m=+0.082386406 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, vcs-type=git) Feb 1 03:13:21 localhost podman[74300]: 2026-02-01 08:13:21.902655517 +0000 UTC m=+0.117579022 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 1 03:13:21 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:13:24 localhost systemd[1]: tmp-crun.MKupGY.mount: Deactivated successfully. 
Feb 1 03:13:24 localhost podman[74321]: 2026-02-01 08:13:24.878901493 +0000 UTC m=+0.090127724 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:13:24 localhost podman[74321]: 2026-02-01 08:13:24.891677121 +0000 UTC m=+0.102903382 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:13:24 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:13:24 localhost podman[74322]: 2026-02-01 08:13:24.933371395 +0000 UTC m=+0.141466336 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=) Feb 1 03:13:24 localhost podman[74322]: 2026-02-01 08:13:24.946113813 +0000 UTC m=+0.154208694 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 1 03:13:24 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: 
Deactivated successfully. Feb 1 03:13:25 localhost podman[74323]: 2026-02-01 08:13:25.000656927 +0000 UTC m=+0.207759916 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com) Feb 1 03:13:25 localhost podman[74323]: 2026-02-01 08:13:25.029665754 +0000 UTC m=+0.236768713 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:13:25 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:13:25 localhost podman[74324]: 2026-02-01 08:13:25.087203865 +0000 UTC m=+0.288558400 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true) Feb 1 03:13:25 localhost podman[74324]: 2026-02-01 08:13:25.117797693 +0000 UTC m=+0.319152308 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:13:25 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:13:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:13:26 localhost podman[74410]: 2026-02-01 08:13:26.864961041 +0000 UTC m=+0.081793367 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:13:27 localhost podman[74410]: 2026-02-01 08:13:27.258607221 +0000 UTC m=+0.475439507 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team) Feb 1 03:13:27 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:13:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:13:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:13:29 localhost systemd[1]: tmp-crun.ZLuQeI.mount: Deactivated successfully. 
Feb 1 03:13:29 localhost podman[74433]: 2026-02-01 08:13:29.878619725 +0000 UTC m=+0.088913914 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 1 03:13:29 localhost podman[74434]: 2026-02-01 08:13:29.919487983 +0000 UTC m=+0.126307681 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:13:29 localhost podman[74433]: 2026-02-01 08:13:29.967904251 +0000 UTC m=+0.178198430 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public) Feb 1 03:13:29 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:13:30 localhost podman[74434]: 2026-02-01 08:13:30.024323965 +0000 UTC m=+0.231143683 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:13:30 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:13:41 localhost podman[74479]: 2026-02-01 08:13:41.878410703 +0000 UTC m=+0.091753776 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container) Feb 1 03:13:42 localhost podman[74479]: 2026-02-01 08:13:42.078734989 +0000 UTC m=+0.292078102 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, tcib_managed=true, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:13:42 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:13:43 localhost podman[74609]: 2026-02-01 08:13:43.465512281 +0000 UTC m=+0.099766431 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:13:43 localhost podman[74609]: 2026-02-01 08:13:43.596746329 +0000 UTC m=+0.231000469 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.41.4, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, GIT_CLEAN=True, ceph=True, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main) Feb 1 03:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:13:52 localhost podman[74750]: 2026-02-01 08:13:52.904708763 +0000 UTC m=+0.117183601 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.13, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:13:52 localhost podman[74750]: 2026-02-01 08:13:52.917701577 +0000 UTC m=+0.130176375 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.buildah.version=1.41.5, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:13:52 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:13:55 localhost podman[74769]: 2026-02-01 08:13:55.855568298 +0000 UTC m=+0.066302617 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible) Feb 1 03:13:55 localhost podman[74769]: 2026-02-01 08:13:55.887824148 +0000 UTC m=+0.098558457 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, architecture=x86_64, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5) Feb 1 03:13:55 localhost systemd[1]: tmp-crun.u2fwxg.mount: Deactivated successfully. Feb 1 03:13:55 localhost podman[74771]: 2026-02-01 08:13:55.911501374 +0000 UTC m=+0.115324072 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, build-date=2026-01-12T23:07:47Z, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:13:55 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:13:55 localhost podman[74771]: 2026-02-01 08:13:55.956595823 +0000 UTC m=+0.160418561 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:13:55 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:13:55 localhost podman[74770]: 2026-02-01 08:13:55.980214447 +0000 UTC m=+0.190028926 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vcs-type=git, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Feb 1 03:13:56 localhost podman[74770]: 2026-02-01 08:13:56.014738738 +0000 UTC m=+0.224553177 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 1 03:13:56 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:13:56 localhost podman[74777]: 2026-02-01 08:13:56.03106312 +0000 UTC m=+0.231177990 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:13:56 localhost podman[74777]: 2026-02-01 08:13:56.081412297 +0000 UTC m=+0.281527157 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:13:56 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:13:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:13:57 localhost podman[74856]: 2026-02-01 08:13:57.865107352 +0000 UTC m=+0.081555914 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:13:58 localhost python3[74926]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 1 03:13:58 localhost podman[74856]: 2026-02-01 08:13:58.247665551 +0000 UTC m=+0.464114153 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:13:58 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:13:58 localhost python3[74972]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933637.8956897-112806-229797675245333/source _original_basename=tmpcshnf26x follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:13:59 localhost python3[75002]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:14:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:14:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:14:00 localhost systemd[1]: tmp-crun.i0BZhm.mount: Deactivated successfully. 
Feb 1 03:14:00 localhost podman[75053]: 2026-02-01 08:14:00.12231632 +0000 UTC m=+0.098951949 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510) Feb 1 03:14:00 localhost podman[75053]: 2026-02-01 08:14:00.178673279 +0000 UTC m=+0.155308908 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:14:00 localhost podman[75070]: 2026-02-01 08:14:00.195731473 +0000 UTC m=+0.072672820 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:14:00 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:14:00 localhost podman[75070]: 2026-02-01 08:14:00.217698714 +0000 UTC m=+0.094640071 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 1 03:14:00 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:14:01 localhost ansible-async_wrapper.py[75222]: Invoked with 203038787111 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933640.7088668-113062-14324234785392/AnsiballZ_command.py _ Feb 1 03:14:01 localhost ansible-async_wrapper.py[75225]: Starting module and watcher Feb 1 03:14:01 localhost ansible-async_wrapper.py[75225]: Start watching 75226 (3600) Feb 1 03:14:01 localhost ansible-async_wrapper.py[75226]: Start module (75226) Feb 1 03:14:01 localhost ansible-async_wrapper.py[75222]: Return async_wrapper task started. Feb 1 03:14:01 localhost python3[75246]: ansible-ansible.legacy.async_status Invoked with jid=203038787111.75222 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:14:04 localhost puppet-user[75245]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 1 03:14:04 localhost puppet-user[75245]: (file: /etc/puppet/hiera.yaml) Feb 1 03:14:04 localhost puppet-user[75245]: Warning: Undefined variable '::deploy_config_name'; Feb 1 03:14:04 localhost puppet-user[75245]: (file & line not available) Feb 1 03:14:04 localhost puppet-user[75245]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 1 03:14:04 localhost puppet-user[75245]: (file & line not available) Feb 1 03:14:04 localhost puppet-user[75245]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:14:05 localhost puppet-user[75245]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:14:05 localhost puppet-user[75245]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:14:05 localhost puppet-user[75245]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:14:05 localhost puppet-user[75245]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 1 03:14:05 localhost puppet-user[75245]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 1 03:14:05 localhost puppet-user[75245]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 1 03:14:05 localhost puppet-user[75245]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 1 03:14:05 localhost puppet-user[75245]: Notice: Compiled catalog for np0005604215.localdomain in environment production in 0.22 seconds Feb 1 03:14:05 localhost puppet-user[75245]: Notice: Applied catalog in 0.23 seconds Feb 1 03:14:05 localhost puppet-user[75245]: Application: Feb 1 03:14:05 localhost puppet-user[75245]: Initial environment: production Feb 1 03:14:05 localhost puppet-user[75245]: Converged environment: production Feb 1 03:14:05 localhost puppet-user[75245]: Run mode: user Feb 1 03:14:05 localhost puppet-user[75245]: Changes: Feb 1 03:14:05 localhost puppet-user[75245]: Events: Feb 1 03:14:05 localhost puppet-user[75245]: Resources: Feb 1 03:14:05 localhost puppet-user[75245]: Total: 19 Feb 1 03:14:05 localhost puppet-user[75245]: Time: Feb 1 03:14:05 localhost puppet-user[75245]: Schedule: 0.00 Feb 1 03:14:05 localhost puppet-user[75245]: Package: 0.00 Feb 1 03:14:05 localhost puppet-user[75245]: Exec: 0.01 Feb 1 03:14:05 localhost puppet-user[75245]: Augeas: 0.01 Feb 1 03:14:05 localhost puppet-user[75245]: File: 0.02 Feb 1 03:14:05 localhost puppet-user[75245]: Service: 0.04 Feb 1 03:14:05 localhost puppet-user[75245]: Transaction evaluation: 0.22 Feb 1 03:14:05 localhost puppet-user[75245]: Catalog application: 0.23 Feb 1 03:14:05 localhost puppet-user[75245]: Config retrieval: 0.28 Feb 1 03:14:05 localhost puppet-user[75245]: Last run: 1769933645 Feb 1 03:14:05 localhost puppet-user[75245]: Filebucket: 0.00 Feb 1 03:14:05 localhost puppet-user[75245]: Total: 0.23 Feb 1 03:14:05 localhost puppet-user[75245]: Version: Feb 1 03:14:05 localhost puppet-user[75245]: Config: 1769933644 Feb 1 03:14:05 localhost puppet-user[75245]: Puppet: 7.10.0 Feb 1 03:14:05 localhost ansible-async_wrapper.py[75226]: Module complete (75226) Feb 1 03:14:06 localhost ansible-async_wrapper.py[75225]: Done in kid B. Feb 1 03:14:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:14:10 localhost recover_tripleo_nova_virtqemud[75370]: 62016 Feb 1 03:14:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:14:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:14:11 localhost python3[75386]: ansible-ansible.legacy.async_status Invoked with jid=203038787111.75222 mode=status _async_dir=/tmp/.ansible_async Feb 1 03:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:14:12 localhost podman[75403]: 2026-02-01 08:14:12.6043233 +0000 UTC m=+0.084504187 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1766032510, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr) Feb 1 03:14:12 localhost python3[75402]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 1 03:14:12 localhost podman[75403]: 2026-02-01 08:14:12.76852915 +0000 UTC m=+0.248709967 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:14:12 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:14:13 localhost python3[75446]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 1 03:14:13 localhost python3[75496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:13 localhost python3[75514]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmptozshck9 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 1 03:14:14 localhost python3[75544]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:15 localhost python3[75649]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 1 03:14:16 localhost python3[75668]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:16 localhost python3[75700]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 1 03:14:17 localhost python3[75750]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:17 localhost python3[75768]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:18 localhost python3[75830]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:18 localhost python3[75848]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:19 localhost python3[75910]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:19 localhost python3[75928]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:19 localhost python3[75990]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:20 localhost python3[76008]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:20 localhost python3[76038]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:14:20 localhost systemd[1]: Reloading.
Feb 1 03:14:20 localhost systemd-rc-local-generator[76061]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:14:20 localhost systemd-sysv-generator[76065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 03:14:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 03:14:21 localhost python3[76124]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:21 localhost python3[76142]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:22 localhost python3[76204]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 1 03:14:22 localhost python3[76222]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 1 03:14:23 localhost python3[76252]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 1 03:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.
Feb 1 03:14:23 localhost systemd[1]: Reloading.
Feb 1 03:14:23 localhost systemd-rc-local-generator[76288]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 03:14:23 localhost systemd-sysv-generator[76294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
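The two ansible-systemd invocations above (tripleo-container-shutdown and netns-placeholder) amount to reloading systemd and then enabling and starting the units that the preceding file tasks installed. A rough shell equivalent of that intent, not a literal replay of the module's internals:

  systemctl daemon-reload
  systemctl enable --now tripleo-container-shutdown.service
  systemctl enable --now netns-placeholder.service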
Feb 1 03:14:23 localhost podman[76254]: 2026-02-01 08:14:23.200459242 +0000 UTC m=+0.110745695 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, tcib_managed=true, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:14:23 localhost podman[76254]: 2026-02-01 08:14:23.214568982 +0000 UTC m=+0.124855405 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, 
config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible) Feb 1 03:14:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:14:23 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:14:23 localhost systemd[1]: Starting Create netns directory... Feb 1 03:14:23 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 03:14:23 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 03:14:23 localhost systemd[1]: Finished Create netns directory. Feb 1 03:14:24 localhost python3[76327]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 1 03:14:26 localhost python3[76385]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
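The ansible-tripleo_container_manage task above reads one JSON startup definition per container from the step directory it was given. A quick way to see what step 5 will launch (the directory and *.json pattern come from the logged parameters; the nova_compute.json file name is only an assumption):

  ls /var/lib/tripleo-config/container-startup-config/step_5/*.json
  python3 -m json.tool /var/lib/tripleo-config/container-startup-config/step_5/nova_compute.json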
Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:14:26 localhost podman[76394]: 2026-02-01 08:14:26.202980687 +0000 UTC m=+0.085309463 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.buildah.version=1.41.5) Feb 1 03:14:26 localhost podman[76388]: 2026-02-01 08:14:26.26197977 +0000 UTC m=+0.156094463 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
container_name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=) Feb 1 03:14:26 localhost podman[76388]: 2026-02-01 08:14:26.297313998 +0000 UTC m=+0.191428651 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git) Feb 1 03:14:26 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:14:26 localhost podman[76390]: 2026-02-01 08:14:26.319899288 +0000 UTC m=+0.208634200 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:14:26 localhost podman[76390]: 2026-02-01 08:14:26.352499738 +0000 UTC m=+0.241234640 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:14:26 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:14:26 localhost podman[76389]: 2026-02-01 08:14:26.375848884 +0000 UTC m=+0.267799268 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 1 03:14:26 localhost podman[76394]: 2026-02-01 08:14:26.386452083 +0000 UTC m=+0.268780919 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi) Feb 1 03:14:26 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:14:26 localhost podman[76389]: 2026-02-01 08:14:26.413522916 +0000 UTC m=+0.305473220 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.buildah.version=1.41.5, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:14:26 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
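The health_status / exec_died pairs above come from the "/usr/bin/podman healthcheck run <container id>" units that systemd starts for each container; they execute the test listed under 'healthcheck' in config_data (for example /openstack/healthcheck). The same check can be run by hand against any of these containers, e.g. (container name taken from the log):

  podman healthcheck run iscsid && echo healthy   # exit status mirrors the health test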
Feb 1 03:14:26 localhost podman[76508]: 2026-02-01 08:14:26.574391661 +0000 UTC m=+0.082123323 container create 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 1 03:14:26 localhost systemd[1]: Started libpod-conmon-1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.scope. 
Feb 1 03:14:26 localhost podman[76508]: 2026-02-01 08:14:26.535136948 +0000 UTC m=+0.042868580 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:14:26 localhost systemd[1]: Started libcrun container. Feb 1 03:14:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:14:26 localhost podman[76508]: 2026-02-01 08:14:26.678899816 +0000 UTC m=+0.186631468 container init 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5) Feb 1 03:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:14:26 localhost podman[76508]: 2026-02-01 08:14:26.722484526 +0000 UTC m=+0.230216178 container start 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, release=1766032510, version=17.1.13, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 1 03:14:26 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring. Feb 1 03:14:26 localhost python3[76385]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:14:26 localhost systemd[1]: Created slice User Slice of UID 0. Feb 1 03:14:26 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 1 03:14:26 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 1 03:14:26 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 1 03:14:26 localhost podman[76530]: 2026-02-01 08:14:26.826666902 +0000 UTC m=+0.094530148 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:14:26 localhost podman[76530]: 2026-02-01 08:14:26.888754643 +0000 UTC m=+0.156617969 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute) Feb 1 03:14:26 localhost podman[76530]: unhealthy Feb 1 03:14:26 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:14:26 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
Feb 1 03:14:26 localhost systemd[76547]: Queued start job for default target Main User Target. Feb 1 03:14:26 localhost systemd[76547]: Created slice User Application Slice. Feb 1 03:14:26 localhost systemd[76547]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 1 03:14:26 localhost systemd[76547]: Started Daily Cleanup of User's Temporary Directories. Feb 1 03:14:26 localhost systemd[76547]: Reached target Paths. Feb 1 03:14:26 localhost systemd[76547]: Reached target Timers. Feb 1 03:14:26 localhost systemd[76547]: Starting D-Bus User Message Bus Socket... Feb 1 03:14:26 localhost systemd[76547]: Starting Create User's Volatile Files and Directories... Feb 1 03:14:26 localhost systemd[76547]: Listening on D-Bus User Message Bus Socket. Feb 1 03:14:26 localhost systemd[76547]: Reached target Sockets. Feb 1 03:14:26 localhost systemd[76547]: Finished Create User's Volatile Files and Directories. Feb 1 03:14:26 localhost systemd[76547]: Reached target Basic System. Feb 1 03:14:26 localhost systemd[76547]: Reached target Main User Target. Feb 1 03:14:26 localhost systemd[76547]: Startup finished in 138ms. Feb 1 03:14:26 localhost systemd[1]: Started User Manager for UID 0. Feb 1 03:14:26 localhost systemd[1]: Started Session c10 of User root. Feb 1 03:14:27 localhost systemd[1]: session-c10.scope: Deactivated successfully. Feb 1 03:14:27 localhost podman[76629]: 2026-02-01 08:14:27.241569553 +0000 UTC m=+0.077995940 container create 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, architecture=x86_64) Feb 1 03:14:27 localhost systemd[1]: Started libpod-conmon-09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1.scope. Feb 1 03:14:27 localhost podman[76629]: 2026-02-01 08:14:27.198384144 +0000 UTC m=+0.034810511 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:14:27 localhost systemd[1]: Started libcrun container. Feb 1 03:14:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7de42bd2ef28ab6d43ca2881ed0bac026c1f46d7bf355b9a366b5c9ec93a4c0/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7de42bd2ef28ab6d43ca2881ed0bac026c1f46d7bf355b9a366b5c9ec93a4c0/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 1 03:14:27 localhost podman[76629]: 2026-02-01 08:14:27.325240074 +0000 UTC m=+0.161666461 container init 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=nova_wait_for_compute_service, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:14:27 localhost podman[76629]: 2026-02-01 08:14:27.341096179 +0000 UTC m=+0.177522566 container start 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 1 03:14:27 localhost podman[76629]: 2026-02-01 08:14:27.341521613 +0000 UTC m=+0.177948000 container attach 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 1 03:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:14:28 localhost podman[76652]: 2026-02-01 08:14:28.870819259 +0000 UTC m=+0.084993833 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:14:29 localhost podman[76652]: 2026-02-01 08:14:29.238715071 +0000 UTC m=+0.452889645 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, 
config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:14:29 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:14:30 localhost systemd[1]: tmp-crun.xB27rr.mount: Deactivated successfully. 
Feb 1 03:14:30 localhost podman[76676]: 2026-02-01 08:14:30.883110742 +0000 UTC m=+0.097376479 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:14:30 localhost systemd[1]: tmp-crun.O9YqNO.mount: Deactivated successfully. 
Feb 1 03:14:30 localhost podman[76677]: 2026-02-01 08:14:30.937454276 +0000 UTC m=+0.149676867 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:14:30 localhost podman[76676]: 2026-02-01 08:14:30.944678737 +0000 UTC m=+0.158944474 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:14:30 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:14:30 localhost podman[76677]: 2026-02-01 08:14:30.973785976 +0000 UTC m=+0.186008577 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13) Feb 1 03:14:30 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:14:37 localhost systemd[1]: Stopping User Manager for UID 0... Feb 1 03:14:37 localhost systemd[76547]: Activating special unit Exit the Session... Feb 1 03:14:37 localhost systemd[76547]: Stopped target Main User Target. Feb 1 03:14:37 localhost systemd[76547]: Stopped target Basic System. Feb 1 03:14:37 localhost systemd[76547]: Stopped target Paths. Feb 1 03:14:37 localhost systemd[76547]: Stopped target Sockets. Feb 1 03:14:37 localhost systemd[76547]: Stopped target Timers. Feb 1 03:14:37 localhost systemd[76547]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 03:14:37 localhost systemd[76547]: Closed D-Bus User Message Bus Socket. Feb 1 03:14:37 localhost systemd[76547]: Stopped Create User's Volatile Files and Directories. Feb 1 03:14:37 localhost systemd[76547]: Removed slice User Application Slice. Feb 1 03:14:37 localhost systemd[76547]: Reached target Shutdown. Feb 1 03:14:37 localhost systemd[76547]: Finished Exit the Session. Feb 1 03:14:37 localhost systemd[76547]: Reached target Exit the Session. Feb 1 03:14:37 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 1 03:14:37 localhost systemd[1]: Stopped User Manager for UID 0. 
Feb 1 03:14:37 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 1 03:14:37 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 1 03:14:37 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 1 03:14:37 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 1 03:14:37 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 1 03:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:14:43 localhost podman[76724]: 2026-02-01 08:14:43.862129264 +0000 UTC m=+0.076645788 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:14:44 localhost podman[76724]: 2026-02-01 08:14:44.08983513 +0000 UTC m=+0.304351734 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.41.5, 
com.redhat.component=openstack-qdrouterd-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z) Feb 1 03:14:44 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:14:53 localhost podman[76831]: 2026-02-01 08:14:53.891544329 +0000 UTC m=+0.101945196 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:14:53 localhost podman[76831]: 2026-02-01 08:14:53.908637784 +0000 UTC m=+0.119038661 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, container_name=collectd) Feb 1 03:14:53 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:14:56 localhost systemd[1]: tmp-crun.XpUcTi.mount: Deactivated successfully. 
Feb 1 03:14:56 localhost podman[76851]: 2026-02-01 08:14:56.868638211 +0000 UTC m=+0.083244688 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron) Feb 1 03:14:56 localhost podman[76853]: 2026-02-01 08:14:56.924617968 +0000 UTC m=+0.132954045 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 1 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:14:56 localhost podman[76854]: 2026-02-01 08:14:56.892987708 +0000 UTC m=+0.098883237 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Feb 1 03:14:56 localhost podman[76851]: 2026-02-01 08:14:56.953556311 +0000 UTC m=+0.168162728 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20260112.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:14:56 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:14:56 localhost podman[76854]: 2026-02-01 08:14:56.979589282 +0000 UTC m=+0.185484791 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.) Feb 1 03:14:56 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:14:57 localhost podman[76853]: 2026-02-01 08:14:57.009515997 +0000 UTC m=+0.217852074 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.expose-services=) Feb 1 03:14:57 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:14:57 localhost podman[76924]: 2026-02-01 08:14:57.080944086 +0000 UTC m=+0.139311356 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:14:57 localhost podman[76852]: 2026-02-01 08:14:57.136002264 +0000 UTC m=+0.344247428 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1766032510, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:14:57 localhost podman[76924]: 2026-02-01 08:14:57.140670163 +0000 UTC m=+0.199037433 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:14:57 localhost podman[76924]: unhealthy Feb 1 03:14:57 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:14:57 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
Feb 1 03:14:57 localhost podman[76852]: 2026-02-01 08:14:57.169566385 +0000 UTC m=+0.377811499 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible) Feb 1 03:14:57 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:14:59 localhost podman[76963]: 2026-02-01 08:14:59.851754766 +0000 UTC m=+0.067663520 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:15:00 localhost podman[76963]: 2026-02-01 08:15:00.172629787 +0000 UTC m=+0.388538501 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:15:00 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:15:01 localhost podman[76986]: 2026-02-01 08:15:01.875835205 +0000 UTC m=+0.086647227 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, version=17.1.13, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 1 03:15:01 localhost podman[76986]: 2026-02-01 08:15:01.931246483 +0000 UTC m=+0.142058525 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true) Feb 1 03:15:01 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:15:01 localhost podman[76987]: 2026-02-01 08:15:01.93272721 +0000 UTC m=+0.141552498 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.13, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Feb 1 03:15:02 localhost podman[76987]: 2026-02-01 08:15:02.011585627 +0000 UTC m=+0.220410895 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true) Feb 1 03:15:02 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:15:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:15:14 localhost podman[77034]: 2026-02-01 08:15:14.866138118 +0000 UTC m=+0.082875597 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 1 03:15:15 localhost podman[77034]: 2026-02-01 08:15:15.054953763 +0000 UTC m=+0.271691262 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, container_name=metrics_qdr) Feb 1 03:15:15 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:15:24 localhost podman[77064]: 2026-02-01 08:15:24.848186321 +0000 UTC m=+0.067193176 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:15:24 localhost podman[77064]: 2026-02-01 08:15:24.860840415 +0000 UTC m=+0.079847290 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, 
build-date=2026-01-12T22:10:15Z, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:15:24 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:15:27 localhost systemd[1]: tmp-crun.Dl3v5J.mount: Deactivated successfully. 
Feb 1 03:15:27 localhost podman[77088]: 2026-02-01 08:15:27.883228013 +0000 UTC m=+0.094242600 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:15:27 localhost podman[77086]: 2026-02-01 08:15:27.913668034 +0000 UTC m=+0.127665836 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, version=17.1.13, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.expose-services=) Feb 1 03:15:27 localhost podman[77088]: 2026-02-01 08:15:27.9217102 +0000 UTC m=+0.132724777 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com) Feb 1 03:15:27 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:15:27 localhost podman[77090]: 2026-02-01 08:15:27.983843854 +0000 UTC m=+0.193876780 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:15:28 localhost podman[77090]: 2026-02-01 08:15:28.012433166 +0000 UTC m=+0.222466042 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z) Feb 1 03:15:28 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
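The recurring pattern in this window is a three-step cycle per container: a health_status event, an exec_died event for the healthcheck process, and systemd deactivating the transient <container-id>.service that ran /usr/bin/podman healthcheck run. A rough sketch of reconstructing that cycle per container from a dump like this one; it assumes one journal entry per line (as journalctl normally emits) and a hypothetical input file name:

    import re
    from collections import defaultdict
    from pathlib import Path

    CID = re.compile(r"\b[0-9a-f]{64}\b")
    EVENT = re.compile(r"health_status=\w+|container exec_died|Deactivated successfully")

    timeline = defaultdict(list)
    for entry in Path("compute-node.log").read_text().splitlines():
        cid, event = CID.search(entry), EVENT.search(entry)
        if cid and event:
            # Key by a short id prefix; record the event kind in arrival order.
            timeline[cid.group(0)[:12]].append(event.group(0))

    for cid, events in sorted(timeline.items()):
        print(cid, "->", " / ".join(events))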
Feb 1 03:15:28 localhost podman[77086]: 2026-02-01 08:15:28.051705169 +0000 UTC m=+0.265702991 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible) Feb 1 03:15:28 localhost podman[77087]: 2026-02-01 08:15:27.848620448 +0000 UTC m=+0.062293200 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:15:28 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
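Each podman health_status and exec_died event above embeds the container's full TripleO config_data payload as a Python-style dict (healthcheck test, network mode, bind mounts). A minimal sketch of pulling that payload out of a single journal entry for inspection; the sample entry below is a shortened, hypothetical stand-in, not copied from this log:

    import ast

    def extract_config_data(entry: str) -> dict:
        """Return the config_data dict embedded in a podman journal entry."""
        start = entry.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(entry[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # The payload is a plain Python literal, so literal_eval suffices.
                    return ast.literal_eval(entry[start:i + 1])
        raise ValueError("unterminated config_data payload")

    sample = ("container health_status abc (image=example:17.1, name=iscsid, "
              "config_data={'healthcheck': {'test': '/openstack/healthcheck'}, "
              "'net': 'host', 'privileged': True}, managed_by=tripleo_ansible)")
    cfg = extract_config_data(sample)
    print(cfg["healthcheck"]["test"])   # /openstack/healthcheck
    print(cfg["net"], cfg["privileged"])  # host True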
Feb 1 03:15:28 localhost podman[77089]: 2026-02-01 08:15:28.100396303 +0000 UTC m=+0.311205363 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:15:28 localhost podman[77089]: 2026-02-01 08:15:28.134723968 +0000 UTC m=+0.345533058 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:15:28 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:15:28 localhost podman[77087]: 2026-02-01 08:15:28.184788306 +0000 UTC m=+0.398461138 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_compute, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13) Feb 1 03:15:28 localhost podman[77087]: unhealthy Feb 1 03:15:28 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:15:28 localhost systemd[1]: 
1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:15:30 localhost podman[77195]: 2026-02-01 08:15:30.867705551 +0000 UTC m=+0.084296021 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 1 03:15:31 localhost podman[77195]: 2026-02-01 08:15:31.197792876 +0000 UTC m=+0.414383386 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:15:31 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:15:32 localhost systemd[1]: tmp-crun.xV0C4X.mount: Deactivated successfully. 
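nova_compute is the one check in this window that does not come back healthy: podman prints "unhealthy" and the transient unit 1543f157ce44...service exits with status 1 (and fails again at 03:15:58 further below). When triaging a dump like this, the failing checks can be separated from the healthy noise with a small filter; the input path is hypothetical:

    import re
    from pathlib import Path

    SUSPECT = re.compile(
        r"health_status=unhealthy"
        r"|podman\[\d+\]: unhealthy"
        r"|status=1/FAILURE"
        r"|Failed with result 'exit-code'"
    )

    for entry in Path("compute-node.log").read_text().splitlines():
        if SUSPECT.search(entry):
            # Print the timestamp prefix and the interesting tail to keep output short.
            print(entry[:16], "...", entry.rstrip()[-100:])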
Feb 1 03:15:32 localhost podman[77218]: 2026-02-01 08:15:32.873406732 +0000 UTC m=+0.086167161 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, architecture=x86_64) Feb 1 03:15:32 localhost podman[77218]: 2026-02-01 08:15:32.917095987 +0000 UTC m=+0.129856366 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:15:32 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
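ovn_metadata_agent reports healthy through the same mechanism. Outside the journal, the most recent healthcheck result for any of these containers can also be read back from podman itself; a hedged sketch, since the key layout under State ("Health" vs. "Healthcheck") differs between podman versions:

    import json
    import subprocess

    def health_status(container: str) -> str:
        """Return the last recorded healthcheck status for a container."""
        raw = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(raw)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("nova_compute", "ovn_controller", "ovn_metadata_agent"):
        print(name, health_status(name))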
Feb 1 03:15:32 localhost podman[77219]: 2026-02-01 08:15:32.926888309 +0000 UTC m=+0.136012181 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, release=1766032510, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:15:32 localhost podman[77219]: 2026-02-01 08:15:32.955701618 +0000 UTC m=+0.164825510 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 1 03:15:32 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:15:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:15:45 localhost recover_tripleo_nova_virtqemud[77275]: 62016 Feb 1 03:15:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:15:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:15:45 localhost podman[77268]: 2026-02-01 08:15:45.858710475 +0000 UTC m=+0.075840962 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:15:46 localhost podman[77268]: 2026-02-01 08:15:46.029386412 +0000 UTC m=+0.246516929 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step1, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:15:46 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:15:50 localhost systemd[1]: session-27.scope: Deactivated successfully. 
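Two systemd outcomes in this stretch are worth confirming directly on the host: the one-shot tripleo_nova_virtqemud_recover.service, which deactivated cleanly above, and the nova_compute healthcheck unit that failed earlier. A sketch of querying both; the unit names are taken from the log, everything else is an assumption:

    import subprocess

    def unit_result(unit: str) -> dict:
        """Ask systemd for the ActiveState and Result properties of a unit."""
        out = subprocess.run(
            ["systemctl", "show", unit, "-p", "ActiveState,Result"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    for unit in (
        "tripleo_nova_virtqemud_recover.service",
        "1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service",
    ):
        print(unit, unit_result(unit))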
Feb 1 03:15:50 localhost systemd[1]: session-27.scope: Consumed 2.982s CPU time. Feb 1 03:15:50 localhost systemd-logind[761]: Session 27 logged out. Waiting for processes to exit. Feb 1 03:15:50 localhost systemd-logind[761]: Removed session 27. Feb 1 03:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:15:55 localhost systemd[1]: tmp-crun.wVR3xq.mount: Deactivated successfully. Feb 1 03:15:55 localhost podman[77379]: 2026-02-01 08:15:55.899852944 +0000 UTC m=+0.108195735 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:15:55 localhost podman[77379]: 2026-02-01 08:15:55.912072383 +0000 UTC m=+0.120415174 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Feb 1 03:15:55 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
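The "Started /usr/bin/podman healthcheck run <id>" lines above identify containers only by their 64-hex IDs, while the health_status and exec_died events carry both the ID and the name= label, so the dump itself is enough to translate one into the other. A small sketch, again with a hypothetical file name:

    import re
    from pathlib import Path

    ID_NAME = re.compile(
        r"container (?:health_status|exec_died) ([0-9a-f]{64}) \(image=[^,]+, name=([^,)]+)"
    )

    names = {}
    for entry in Path("compute-node.log").read_text().splitlines():
        for cid, name in ID_NAME.findall(entry):
            names[cid] = name

    for cid, name in sorted(names.items(), key=lambda kv: kv[1]):
        print(cid[:12], name)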
Feb 1 03:15:58 localhost podman[77401]: 2026-02-01 08:15:58.868955592 +0000 UTC m=+0.080992096 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true) Feb 1 03:15:58 localhost systemd[1]: tmp-crun.0wuDNM.mount: Deactivated successfully. 
Feb 1 03:15:58 localhost podman[77413]: 2026-02-01 08:15:58.888166874 +0000 UTC m=+0.087559455 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, container_name=ceilometer_agent_ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 1 03:15:58 localhost podman[77401]: 2026-02-01 08:15:58.921731465 +0000 UTC m=+0.133768009 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute) Feb 1 03:15:58 localhost systemd[1]: tmp-crun.fUJiw5.mount: Deactivated successfully. 
Feb 1 03:15:58 localhost podman[77401]: unhealthy Feb 1 03:15:58 localhost podman[77402]: 2026-02-01 08:15:58.933791851 +0000 UTC m=+0.142368195 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, vcs-type=git, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:15:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:15:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
Feb 1 03:15:58 localhost podman[77413]: 2026-02-01 08:15:58.971622297 +0000 UTC m=+0.171014818 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:15:58 localhost podman[77402]: 2026-02-01 08:15:58.971911147 +0000 UTC m=+0.180487491 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:15:58 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:15:59 localhost podman[77400]: 2026-02-01 08:15:58.976341808 +0000 UTC m=+0.191752501 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5) Feb 1 03:15:59 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:15:59 localhost podman[77404]: 2026-02-01 08:15:59.038944296 +0000 UTC m=+0.240101484 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T23:07:47Z, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 1 03:15:59 localhost podman[77400]: 2026-02-01 08:15:59.05973147 +0000 UTC m=+0.275142183 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.) Feb 1 03:15:59 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:15:59 localhost podman[77404]: 2026-02-01 08:15:59.07260074 +0000 UTC m=+0.273757888 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, vcs-type=git) Feb 1 03:15:59 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:16:01 localhost podman[77516]: 2026-02-01 08:16:01.848139421 +0000 UTC m=+0.066977389 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, tcib_managed=true, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:16:02 localhost podman[77516]: 2026-02-01 08:16:02.222750247 +0000 UTC m=+0.441588205 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.expose-services=, vcs-type=git, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible) Feb 1 03:16:02 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:16:03 localhost podman[77541]: 2026-02-01 08:16:03.878169989 +0000 UTC m=+0.089972172 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, release=1766032510) Feb 1 03:16:03 localhost podman[77541]: 2026-02-01 08:16:03.928562838 +0000 UTC m=+0.140365011 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, release=1766032510, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:16:03 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:16:03 localhost podman[77542]: 2026-02-01 08:16:03.932247735 +0000 UTC m=+0.141133175 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13) Feb 1 03:16:04 localhost podman[77542]: 2026-02-01 08:16:04.017692601 +0000 UTC m=+0.226578011 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1) Feb 1 03:16:04 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:16:16 localhost podman[77590]: 2026-02-01 08:16:16.868880056 +0000 UTC m=+0.082324729 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, distribution-scope=public) Feb 1 03:16:17 localhost podman[77590]: 2026-02-01 08:16:17.068732084 +0000 UTC m=+0.282176807 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public) Feb 1 03:16:17 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:16:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:16:26 localhost podman[77619]: 2026-02-01 08:16:26.867988253 +0000 UTC m=+0.085574221 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.41.5) Feb 1 03:16:26 localhost podman[77619]: 2026-02-01 08:16:26.882816046 +0000 UTC m=+0.100401994 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack 
TripleO Team, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd) Feb 1 03:16:26 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:16:29 localhost podman[77640]: 2026-02-01 08:16:29.885570718 +0000 UTC m=+0.098227676 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, release=1766032510, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 1 03:16:29 localhost podman[77640]: 2026-02-01 08:16:29.894345888 +0000 UTC m=+0.107002856 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true) Feb 1 03:16:29 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:16:29 localhost systemd[1]: tmp-crun.KifgIF.mount: Deactivated successfully. 
Feb 1 03:16:29 localhost podman[77641]: 2026-02-01 08:16:29.946769261 +0000 UTC m=+0.155869805 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, config_id=tripleo_step5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:16:29 localhost podman[77643]: 2026-02-01 08:16:29.998120351 +0000 UTC m=+0.194931433 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, version=17.1.13, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:16:30 localhost podman[77641]: 2026-02-01 08:16:30.033927233 +0000 UTC m=+0.243027787 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:16:30 localhost podman[77641]: unhealthy Feb 1 03:16:30 localhost podman[77650]: 2026-02-01 08:16:30.04479615 +0000 UTC m=+0.241289142 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:16:30 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:16:30 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
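The records above capture one failing check in full: podman flags nova_compute as health_status=unhealthy, prints "unhealthy", and the transient systemd unit 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service exits 1/FAILURE, while the ceilometer and iscsid checks in the same batch pass. The check command itself is visible in config_data ('/openstack/healthcheck 5672'). A minimal shell sketch for investigating from the compute host, using only names that appear in the log (the podman inspect field name varies between podman versions):

  # Re-run the same healthcheck podman just ran; the exit status mirrors the health result
  podman healthcheck run nova_compute; echo "exit=$?"

  # Or invoke the configured test directly inside the container
  podman exec nova_compute /openstack/healthcheck 5672

  # Recorded health state (.State.Health or .State.Healthcheck, depending on podman version)
  podman inspect nova_compute | grep -iA5 health

  # The transient unit is named after the container ID, as in the records above
  journalctl -u 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service -n 20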
Feb 1 03:16:30 localhost podman[77650]: 2026-02-01 08:16:30.074575751 +0000 UTC m=+0.271068753 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:16:30 localhost podman[77642]: 2026-02-01 08:16:30.085118767 +0000 UTC m=+0.292124645 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1766032510, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:16:30 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:16:30 localhost podman[77643]: 2026-02-01 08:16:30.106953773 +0000 UTC m=+0.303764835 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true) Feb 1 03:16:30 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:16:30 localhost podman[77642]: 2026-02-01 08:16:30.120904079 +0000 UTC m=+0.327909957 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, container_name=iscsid, release=1766032510, maintainer=OpenStack TripleO Team) Feb 1 03:16:30 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:16:32 localhost podman[77754]: 2026-02-01 08:16:32.878840878 +0000 UTC m=+0.091976967 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1) Feb 1 03:16:33 localhost podman[77754]: 2026-02-01 08:16:33.250544791 +0000 UTC m=+0.463680860 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:16:33 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
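Every check in this excerpt follows the same pattern: systemd starts a transient unit named after the container ID, the unit runs /usr/bin/podman healthcheck run <id>, and podman then logs a health_status event followed by exec_died. To map the long IDs in the "Started ..." lines back to container names, podman itself is enough; a sketch, assuming shell access on this host (podman ps prints the short ID, which is a prefix of the full ID used in the unit names):

  # ID-to-name mapping for running containers
  podman ps --format '{{.ID}} {{.Names}}'

  # Resolve one full ID from the log to its container name
  podman inspect --format '{{.Name}}' e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06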
Feb 1 03:16:34 localhost podman[77778]: 2026-02-01 08:16:34.862528457 +0000 UTC m=+0.073618030 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:16:34 localhost podman[77778]: 2026-02-01 08:16:34.903551956 +0000 UTC m=+0.114641489 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git) Feb 1 03:16:34 localhost podman[77779]: 2026-02-01 08:16:34.916362525 +0000 UTC m=+0.123952577 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:16:34 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:16:34 localhost podman[77779]: 2026-02-01 08:16:34.938697767 +0000 UTC m=+0.146287849 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_controller, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com) Feb 1 03:16:34 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:16:47 localhost podman[77827]: 2026-02-01 08:16:47.875707047 +0000 UTC m=+0.084756226 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.13, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:16:48 localhost podman[77827]: 2026-02-01 08:16:48.091138093 +0000 UTC m=+0.300187232 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, architecture=x86_64) Feb 1 03:16:48 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:16:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:16:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:16:57 localhost recover_tripleo_nova_virtqemud[77938]: 62016 Feb 1 03:16:57 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:16:57 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
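Alongside the container checks, the one-shot "Check and recover tripleo_nova_virtqemud" unit runs and deactivates cleanly; the recover_tripleo_nova_virtqemud line prints only a bare number (presumably a PID echoed by the recovery script, though the log itself does not say). A sketch for reviewing its history, using only the unit name shown above; the timer query is an assumption, relevant only if the unit is timer-driven:

  # Recent runs of the recover unit
  journalctl -u tripleo_nova_virtqemud_recover.service -n 20
  systemctl status tripleo_nova_virtqemud_recover.service

  # If a timer drives it, it will show up here
  systemctl list-timers 'tripleo_nova*'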
Feb 1 03:16:57 localhost podman[77933]: 2026-02-01 08:16:57.893266623 +0000 UTC m=+0.104344991 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, release=1766032510, tcib_managed=true, container_name=collectd, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 1 03:16:57 localhost podman[77933]: 2026-02-01 08:16:57.931514914 +0000 UTC m=+0.142593292 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, version=17.1.13, name=rhosp-rhel9/openstack-collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Feb 1 03:16:57 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:17:00 localhost podman[77957]: 2026-02-01 08:17:00.870792099 +0000 UTC m=+0.083721403 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 1 03:17:00 localhost systemd[1]: tmp-crun.jwdHRO.mount: Deactivated successfully. 
Feb 1 03:17:00 localhost podman[77957]: 2026-02-01 08:17:00.922765318 +0000 UTC m=+0.135694602 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, container_name=nova_compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:17:00 localhost podman[77957]: unhealthy Feb 1 03:17:00 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:17:00 localhost systemd[1]: 
1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:17:00 localhost podman[77956]: 2026-02-01 08:17:00.925671241 +0000 UTC m=+0.139459192 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, container_name=logrotate_crond, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, release=1766032510) Feb 1 03:17:01 localhost podman[77959]: 2026-02-01 08:17:00.977642209 +0000 UTC m=+0.183495326 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, version=17.1.13) Feb 1 03:17:01 localhost podman[77958]: 2026-02-01 08:17:01.037570422 +0000 UTC m=+0.244188485 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 1 03:17:01 localhost podman[77958]: 2026-02-01 08:17:01.051652012 +0000 UTC m=+0.258270065 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, 
architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:17:01 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:17:01 localhost podman[77956]: 2026-02-01 08:17:01.106210253 +0000 UTC m=+0.319998244 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, architecture=x86_64, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z) Feb 1 03:17:01 localhost podman[77960]: 2026-02-01 08:17:01.140272189 +0000 UTC m=+0.342660126 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:17:01 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:17:01 localhost podman[77960]: 2026-02-01 08:17:01.171982501 +0000 UTC m=+0.374370458 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13) Feb 1 03:17:01 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:17:01 localhost podman[77959]: 2026-02-01 08:17:01.208958932 +0000 UTC m=+0.414812059 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:17:01 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:17:03 localhost podman[78064]: 2026-02-01 08:17:03.869412239 +0000 UTC m=+0.088264117 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:17:04 localhost podman[78064]: 2026-02-01 08:17:04.240902706 +0000 UTC m=+0.459754604 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, tcib_managed=true) Feb 1 03:17:04 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:17:05 localhost systemd[1]: tmp-crun.m8FDND.mount: Deactivated successfully. 
Feb 1 03:17:05 localhost podman[78088]: 2026-02-01 08:17:05.885489982 +0000 UTC m=+0.094761975 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:17:05 localhost podman[78088]: 2026-02-01 08:17:05.936652335 +0000 UTC m=+0.145924298 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible) Feb 1 03:17:05 localhost podman[78087]: 2026-02-01 08:17:05.935604231 +0000 UTC m=+0.146555197 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:17:05 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:17:06 localhost podman[78087]: 2026-02-01 08:17:06.020851412 +0000 UTC m=+0.231802398 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:17:06 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:17:06 localhost systemd[1]: tmp-crun.iC6rIL.mount: Deactivated successfully. Feb 1 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:17:18 localhost podman[78134]: 2026-02-01 08:17:18.875974369 +0000 UTC m=+0.087708170 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git) Feb 1 03:17:19 localhost podman[78134]: 2026-02-01 08:17:19.068399501 +0000 UTC m=+0.280133292 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, konflux.additional-tags=17.1.13 
17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 1 03:17:19 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:17:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:17:28 localhost podman[78164]: 2026-02-01 08:17:28.850535233 +0000 UTC m=+0.069040625 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.buildah.version=1.41.5) Feb 1 03:17:28 localhost podman[78164]: 2026-02-01 08:17:28.861266026 +0000 UTC m=+0.079771418 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, release=1766032510, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z) Feb 1 03:17:28 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:17:31 localhost podman[78257]: 2026-02-01 08:17:31.879682888 +0000 UTC m=+0.092971598 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4) Feb 1 03:17:31 localhost podman[78253]: 2026-02-01 08:17:31.92207647 +0000 UTC m=+0.137515259 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step4) Feb 1 03:17:31 localhost podman[78253]: 2026-02-01 08:17:31.933728902 +0000 UTC m=+0.149167751 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:17:31 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:17:31 localhost podman[78256]: 2026-02-01 08:17:31.974031569 +0000 UTC m=+0.184641174 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:17:31 localhost podman[78257]: 2026-02-01 08:17:31.984137131 +0000 UTC m=+0.197425821 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4) Feb 1 03:17:31 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:17:32 localhost podman[78255]: 2026-02-01 08:17:32.031572635 +0000 UTC m=+0.242073976 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com) Feb 1 03:17:32 localhost podman[78256]: 2026-02-01 08:17:32.052026688 +0000 UTC m=+0.262636283 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
container_name=ceilometer_agent_compute, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 1 03:17:32 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:17:32 localhost podman[78255]: 2026-02-01 08:17:32.066866602 +0000 UTC m=+0.277367993 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:17:32 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:17:32 localhost podman[78254]: 2026-02-01 08:17:32.139163989 +0000 UTC m=+0.355147655 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team) Feb 1 03:17:32 localhost podman[78254]: 2026-02-01 08:17:32.174734964 +0000 UTC m=+0.390718660 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, managed_by=tripleo_ansible) Feb 1 03:17:32 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:17:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:17:34 localhost podman[78388]: 2026-02-01 08:17:34.866254013 +0000 UTC m=+0.081331406 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:17:35 localhost podman[78388]: 2026-02-01 08:17:35.2587659 +0000 UTC m=+0.473843243 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:17:35 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:17:36 localhost podman[78411]: 2026-02-01 08:17:36.873606758 +0000 UTC m=+0.090778629 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:17:36 localhost systemd[1]: tmp-crun.Jat3ay.mount: Deactivated successfully. 
Feb 1 03:17:36 localhost podman[78412]: 2026-02-01 08:17:36.927625282 +0000 UTC m=+0.142387526 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 1 03:17:36 localhost podman[78412]: 2026-02-01 08:17:36.956609706 +0000 UTC m=+0.171372010 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, container_name=ovn_controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5) Feb 1 03:17:36 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:17:37 localhost podman[78411]: 2026-02-01 08:17:37.007098718 +0000 UTC m=+0.224270549 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, 
io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:17:37 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:17:39 localhost systemd[1]: libpod-09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1.scope: Deactivated successfully. Feb 1 03:17:39 localhost podman[78458]: 2026-02-01 08:17:39.832564111 +0000 UTC m=+0.060774081 container died 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:32:04Z, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5) Feb 1 
03:17:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1-userdata-shm.mount: Deactivated successfully. Feb 1 03:17:39 localhost systemd[1]: var-lib-containers-storage-overlay-f7de42bd2ef28ab6d43ca2881ed0bac026c1f46d7bf355b9a366b5c9ec93a4c0-merged.mount: Deactivated successfully. Feb 1 03:17:39 localhost podman[78458]: 2026-02-01 08:17:39.863682324 +0000 UTC m=+0.091892264 container cleanup 09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.openshift.expose-services=) Feb 1 03:17:39 localhost systemd[1]: libpod-conmon-09302786a32de733d4c559cf633da3e5eeab3ee808df8ed54a333145ffb0dfe1.scope: Deactivated successfully. 
Feb 1 03:17:39 localhost python3[76385]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=1296029e90a465a2201c8dc6f8be17e7 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 1 03:17:40 localhost python3[78514]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:17:40 localhost python3[78530]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 1 03:17:41 localhost python3[78591]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1769933860.7820203-117803-166392810069579/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False 
_original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:17:41 localhost python3[78607]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 03:17:41 localhost systemd[1]: Reloading. Feb 1 03:17:41 localhost systemd-rc-local-generator[78633]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:17:41 localhost systemd-sysv-generator[78636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:17:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:17:42 localhost python3[78659]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:17:43 localhost systemd[1]: Reloading. Feb 1 03:17:44 localhost systemd-rc-local-generator[78685]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:17:44 localhost systemd-sysv-generator[78688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:17:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:17:44 localhost systemd[1]: Starting nova_compute container... Feb 1 03:17:44 localhost tripleo-start-podman-container[78699]: Creating additional drop-in dependency for "nova_compute" (1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e) Feb 1 03:17:44 localhost systemd[1]: Reloading. Feb 1 03:17:44 localhost systemd-sysv-generator[78757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:17:44 localhost systemd-rc-local-generator[78753]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:17:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:17:44 localhost systemd[1]: Started nova_compute container. 
Feb 1 03:17:45 localhost python3[78798]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:17:46 localhost python3[78919]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005604215 step=5 update_config_hash_only=False Feb 1 03:17:47 localhost python3[78935]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 03:17:47 localhost python3[78951]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 1 03:17:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:17:49 localhost podman[78952]: 2026-02-01 08:17:49.894781774 +0000 UTC m=+0.104660082 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, release=1766032510, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:17:50 localhost podman[78952]: 2026-02-01 08:17:50.13531629 +0000 UTC m=+0.345194558 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, version=17.1.13, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:17:50 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:17:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:17:59 localhost systemd[1]: tmp-crun.Yc6B22.mount: Deactivated successfully. Feb 1 03:17:59 localhost podman[79058]: 2026-02-01 08:17:59.897705643 +0000 UTC m=+0.107026246 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, version=17.1.13, release=1766032510, config_id=tripleo_step3, container_name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:17:59 localhost podman[79058]: 2026-02-01 08:17:59.936741719 +0000 UTC m=+0.146062322 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, version=17.1.13, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:17:59 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:18:02 localhost systemd[1]: tmp-crun.TJWvHj.mount: Deactivated successfully. 
Feb 1 03:18:02 localhost podman[79082]: 2026-02-01 08:18:02.873715032 +0000 UTC m=+0.074381055 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:18:02 localhost podman[79080]: 2026-02-01 08:18:02.918215362 +0000 UTC m=+0.124585638 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:18:02 localhost podman[79082]: 2026-02-01 08:18:02.924814723 +0000 UTC m=+0.125480756 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., 
batch=17.1_20260112.1, container_name=ceilometer_agent_compute, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4) Feb 1 03:18:02 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:18:02 localhost podman[79083]: 2026-02-01 08:18:02.893517614 +0000 UTC m=+0.090052285 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 1 03:18:02 localhost podman[79080]: 2026-02-01 08:18:02.972728882 +0000 UTC m=+0.179099228 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Feb 1 03:18:02 localhost podman[79079]: 2026-02-01 08:18:02.972608548 +0000 UTC m=+0.178150776 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:18:02 localhost podman[79083]: 2026-02-01 08:18:02.976648898 +0000 UTC m=+0.173183569 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4) Feb 1 03:18:02 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:18:02 localhost podman[79079]: 2026-02-01 08:18:02.986399788 +0000 UTC m=+0.191942026 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step4) Feb 1 03:18:03 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:18:03 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:18:03 localhost podman[79081]: 2026-02-01 08:18:03.042471118 +0000 UTC m=+0.244731472 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Feb 1 03:18:03 localhost podman[79081]: 2026-02-01 08:18:03.078607371 +0000 UTC m=+0.280867725 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:34:43Z, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, release=1766032510, container_name=iscsid, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container) Feb 1 03:18:03 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:18:03 localhost systemd[1]: tmp-crun.Wo5L35.mount: Deactivated successfully. Feb 1 03:18:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:18:05 localhost systemd[1]: tmp-crun.T9ukw3.mount: Deactivated successfully. 
Feb 1 03:18:05 localhost podman[79192]: 2026-02-01 08:18:05.915031224 +0000 UTC m=+0.135054112 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:18:06 localhost podman[79192]: 2026-02-01 08:18:06.272663738 +0000 UTC m=+0.492686646 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public) Feb 1 03:18:06 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:18:07 localhost podman[79217]: 2026-02-01 08:18:07.868928342 +0000 UTC m=+0.083376782 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, release=1766032510, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=) Feb 1 03:18:07 localhost podman[79218]: 2026-02-01 08:18:07.921273353 +0000 UTC m=+0.132324265 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Feb 1 03:18:07 localhost podman[79217]: 2026-02-01 08:18:07.942766738 +0000 UTC m=+0.157215178 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:18:07 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:18:07 localhost podman[79218]: 2026-02-01 08:18:07.973772378 +0000 UTC m=+0.184823290 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, version=17.1.13, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4) Feb 1 03:18:07 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:18:16 localhost sshd[79265]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:18:16 localhost systemd-logind[761]: New session 33 of user zuul. Feb 1 03:18:16 localhost systemd[1]: Started Session 33 of User zuul. Feb 1 03:18:17 localhost python3[79374]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 03:18:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:18:20 localhost systemd[1]: tmp-crun.ywv4MF.mount: Deactivated successfully. Feb 1 03:18:20 localhost podman[79562]: 2026-02-01 08:18:20.882467383 +0000 UTC m=+0.091402022 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Feb 1 03:18:21 localhost podman[79562]: 2026-02-01 08:18:21.11443054 +0000 UTC m=+0.323365189 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:10:14Z, version=17.1.13, release=1766032510, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:18:21 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:18:24 localhost python3[79668]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Feb 1 03:18:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:18:28 localhost recover_tripleo_nova_virtqemud[79763]: 62016 Feb 1 03:18:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:18:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:18:28 localhost python3[79761]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Feb 1 03:18:28 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Feb 1 03:18:28 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Feb 1 03:18:28 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 03:18:28 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 03:18:28 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 03:18:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:18:30 localhost systemd[1]: tmp-crun.YJwner.mount: Deactivated successfully. 
Feb 1 03:18:30 localhost podman[79831]: 2026-02-01 08:18:30.886348987 +0000 UTC m=+0.098268287 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:18:30 localhost podman[79831]: 2026-02-01 08:18:30.894705488 +0000 UTC m=+0.106624788 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 1 03:18:30 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:18:33 localhost podman[79856]: 2026-02-01 08:18:33.880612877 +0000 UTC m=+0.083541167 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, release=1766032510, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13) Feb 1 03:18:33 localhost systemd[1]: tmp-crun.QaFoOZ.mount: Deactivated successfully. 
Feb 1 03:18:33 localhost podman[79855]: 2026-02-01 08:18:33.93391028 +0000 UTC m=+0.140502174 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:18:33 localhost podman[79856]: 2026-02-01 08:18:33.937864174 +0000 UTC m=+0.140792484 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:18:33 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:18:33 localhost podman[79855]: 2026-02-01 08:18:33.970654476 +0000 UTC m=+0.177246330 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:18:33 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:18:33 localhost podman[79854]: 2026-02-01 08:18:33.992534719 +0000 UTC m=+0.202122767 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.13, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., tcib_managed=true) Feb 1 03:18:34 localhost podman[79853]: 2026-02-01 08:18:34.038513044 +0000 UTC m=+0.249430303 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:18:34 localhost podman[79853]: 2026-02-01 08:18:34.044534202 +0000 UTC m=+0.255451451 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com) Feb 1 03:18:34 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:18:34 localhost podman[79867]: 2026-02-01 08:18:34.092419006 +0000 UTC m=+0.289410411 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, container_name=ceilometer_agent_ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:18:34 localhost podman[79854]: 2026-02-01 08:18:34.098697201 +0000 UTC m=+0.308285179 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:18:34 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:18:34 localhost podman[79867]: 2026-02-01 08:18:34.116852668 +0000 UTC m=+0.313844113 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi) Feb 1 03:18:34 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:18:36 localhost podman[79971]: 2026-02-01 08:18:36.859375352 +0000 UTC m=+0.076360863 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 1 03:18:37 localhost podman[79971]: 2026-02-01 08:18:37.236042335 +0000 UTC m=+0.453027876 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=) Feb 1 03:18:37 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:18:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:18:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:18:38 localhost podman[79994]: 2026-02-01 08:18:38.864401318 +0000 UTC m=+0.081317819 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:18:38 localhost podman[79994]: 2026-02-01 08:18:38.911313931 +0000 UTC m=+0.128230442 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:18:38 localhost podman[79995]: 2026-02-01 08:18:38.925191354 +0000 UTC m=+0.138086668 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:18:38 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:18:38 localhost podman[79995]: 2026-02-01 08:18:38.954769447 +0000 UTC m=+0.167664841 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.expose-services=) Feb 1 03:18:38 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:18:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:18:51 localhost podman[80041]: 2026-02-01 08:18:51.878633422 +0000 UTC m=+0.089944257 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:18:52 localhost podman[80041]: 2026-02-01 08:18:52.099685209 +0000 UTC m=+0.310995954 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Feb 1 03:18:52 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:19:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:19:01 localhost podman[80148]: 2026-02-01 08:19:01.870109438 +0000 UTC m=+0.082144133 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, container_name=collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:19:01 localhost podman[80148]: 2026-02-01 08:19:01.881555687 +0000 UTC m=+0.093590312 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=collectd) Feb 1 03:19:01 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:19:04 localhost systemd[1]: tmp-crun.3cWqY4.mount: Deactivated successfully. 
Feb 1 03:19:04 localhost podman[80171]: 2026-02-01 08:19:04.890790932 +0000 UTC m=+0.086740897 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:19:04 localhost podman[80171]: 2026-02-01 08:19:04.93270775 +0000 UTC m=+0.128657665 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:19:04 localhost podman[80172]: 2026-02-01 08:19:04.94555017 +0000 UTC m=+0.138597785 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:19:04 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:19:04 localhost podman[80172]: 2026-02-01 08:19:04.980689877 +0000 UTC m=+0.173737492 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.13, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., distribution-scope=public, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:19:05 localhost podman[80169]: 2026-02-01 08:19:05.0074066 +0000 UTC m=+0.210580521 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=logrotate_crond, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:19:05 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:19:05 localhost podman[80178]: 2026-02-01 08:19:05.053357444 +0000 UTC m=+0.245221842 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T23:07:30Z) Feb 1 03:19:05 localhost podman[80169]: 2026-02-01 08:19:05.06989418 +0000 UTC m=+0.273068101 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, release=1766032510, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4) Feb 1 03:19:05 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:19:05 localhost podman[80178]: 2026-02-01 08:19:05.089705868 +0000 UTC m=+0.281570276 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:19:05 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:19:05 localhost podman[80170]: 2026-02-01 08:19:05.161736335 +0000 UTC m=+0.361764128 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:19:05 localhost podman[80170]: 2026-02-01 08:19:05.215876954 +0000 UTC m=+0.415904727 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:19:05 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:19:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:19:07 localhost podman[80286]: 2026-02-01 08:19:07.863496578 +0000 UTC m=+0.078858941 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:19:08 localhost podman[80286]: 2026-02-01 08:19:08.239794628 +0000 UTC m=+0.455156981 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:19:08 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:19:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:19:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:19:09 localhost systemd[1]: tmp-crun.Hp0Ndg.mount: Deactivated successfully. 
Feb 1 03:19:09 localhost podman[80309]: 2026-02-01 08:19:09.873957764 +0000 UTC m=+0.088711239 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) 
Feb 1 03:19:09 localhost podman[80310]: 2026-02-01 08:19:09.937797035 +0000 UTC m=+0.149469244 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:19:09 localhost podman[80309]: 2026-02-01 08:19:09.95076939 +0000 UTC m=+0.165522855 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:19:09 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:19:09 localhost podman[80310]: 2026-02-01 08:19:09.963616051 +0000 UTC m=+0.175288260 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.buildah.version=1.41.5, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:19:09 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:19:10 localhost systemd[1]: tmp-crun.bJALUJ.mount: Deactivated successfully. Feb 1 03:19:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:19:22 localhost systemd[1]: tmp-crun.pTNqxQ.mount: Deactivated successfully. 
Feb 1 03:19:22 localhost podman[80357]: 2026-02-01 08:19:22.877203957 +0000 UTC m=+0.089143943 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:19:23 localhost podman[80357]: 2026-02-01 08:19:23.062727765 +0000 UTC m=+0.274667761 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Feb 1 03:19:23 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:19:28 localhost systemd[1]: session-33.scope: Deactivated successfully. Feb 1 03:19:28 localhost systemd[1]: session-33.scope: Consumed 5.672s CPU time. Feb 1 03:19:28 localhost systemd-logind[761]: Session 33 logged out. Waiting for processes to exit. Feb 1 03:19:28 localhost systemd-logind[761]: Removed session 33. Feb 1 03:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:19:32 localhost systemd[1]: tmp-crun.iBGjSO.mount: Deactivated successfully. 
Feb 1 03:19:32 localhost podman[80431]: 2026-02-01 08:19:32.882155734 +0000 UTC m=+0.098144513 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.13, config_id=tripleo_step3, url=https://www.redhat.com) Feb 1 03:19:32 localhost podman[80431]: 2026-02-01 08:19:32.891775123 +0000 UTC m=+0.107763892 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 
17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com) Feb 1 03:19:32 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:19:35 localhost podman[80467]: 2026-02-01 08:19:35.913554312 +0000 UTC m=+0.112433599 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, distribution-scope=public) Feb 1 03:19:35 localhost podman[80453]: 2026-02-01 08:19:35.873248944 +0000 UTC m=+0.085608622 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:19:35 localhost podman[80467]: 2026-02-01 08:19:35.934620868 +0000 UTC m=+0.133500175 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:19:35 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:19:35 localhost podman[80453]: 2026-02-01 08:19:35.957625486 +0000 UTC m=+0.169985094 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:19:35 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:19:35 localhost podman[80454]: 2026-02-01 08:19:35.88784123 +0000 UTC m=+0.095341936 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git) Feb 1 03:19:36 localhost podman[80455]: 2026-02-01 08:19:36.007547764 +0000 UTC m=+0.209438005 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:19:36 localhost podman[80454]: 2026-02-01 08:19:36.024794752 +0000 UTC m=+0.232295438 container exec_died 
28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible) Feb 1 03:19:36 localhost podman[80452]: 2026-02-01 08:19:35.9808242 +0000 UTC m=+0.192945040 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:19:36 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:19:36 localhost podman[80455]: 2026-02-01 08:19:36.064532142 +0000 UTC m=+0.266422393 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:19:36 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:19:36 localhost podman[80452]: 2026-02-01 08:19:36.115179282 +0000 UTC m=+0.327300192 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:19:36 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:19:36 localhost systemd[1]: tmp-crun.n2pEh3.mount: Deactivated successfully. Feb 1 03:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:19:38 localhost podman[80563]: 2026-02-01 08:19:38.877107412 +0000 UTC m=+0.094781978 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, container_name=nova_migration_target, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 1 03:19:39 localhost podman[80563]: 2026-02-01 08:19:39.233080189 +0000 UTC m=+0.450754815 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, 
com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:19:39 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:19:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:19:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:19:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:19:40 localhost recover_tripleo_nova_virtqemud[80599]: 62016 Feb 1 03:19:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:19:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 1 03:19:40 localhost podman[80586]: 2026-02-01 08:19:40.860950267 +0000 UTC m=+0.078358016 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:19:40 localhost podman[80587]: 2026-02-01 08:19:40.838356102 +0000 UTC m=+0.057322420 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:19:40 localhost podman[80586]: 2026-02-01 08:19:40.894610947 +0000 UTC m=+0.112018716 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:19:40 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:19:40 localhost podman[80587]: 2026-02-01 08:19:40.922690853 +0000 UTC m=+0.141657211 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:19:40 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:19:43 localhost sshd[80637]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:19:43 localhost systemd-logind[761]: New session 34 of user zuul. Feb 1 03:19:43 localhost systemd[1]: Started Session 34 of User zuul. Feb 1 03:19:43 localhost python3[80656]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 03:19:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:19:54 localhost systemd[1]: tmp-crun.hKs3q9.mount: Deactivated successfully. Feb 1 03:19:54 localhost podman[80658]: 2026-02-01 08:19:54.006996383 +0000 UTC m=+0.216521116 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., 
build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.13) Feb 1 03:19:54 localhost podman[80658]: 2026-02-01 08:19:54.198638713 +0000 UTC m=+0.408163336 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:19:54 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:20:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:20:03 localhost podman[80764]: 2026-02-01 08:20:03.873716231 +0000 UTC m=+0.083765305 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git) Feb 1 03:20:03 localhost podman[80764]: 2026-02-01 08:20:03.910391555 +0000 UTC m=+0.120440639 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:20:03 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:20:06 localhost systemd[1]: tmp-crun.n6t1eo.mount: Deactivated successfully. Feb 1 03:20:06 localhost systemd[1]: tmp-crun.YbLDjI.mount: Deactivated successfully. 
Feb 1 03:20:06 localhost podman[80786]: 2026-02-01 08:20:06.912146328 +0000 UTC m=+0.122791813 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Feb 1 03:20:06 localhost podman[80786]: 2026-02-01 08:20:06.922670346 +0000 UTC m=+0.133315871 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.openshift.expose-services=, vcs-type=git, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Feb 1 03:20:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:20:06 localhost podman[80784]: 2026-02-01 08:20:06.885547698 +0000 UTC m=+0.105785991 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron) Feb 1 03:20:06 localhost podman[80784]: 2026-02-01 08:20:06.965152451 +0000 UTC m=+0.185390824 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-cron-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:20:06 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:20:07 localhost podman[80793]: 2026-02-01 08:20:07.00965773 +0000 UTC m=+0.212959345 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:20:07 localhost podman[80793]: 2026-02-01 08:20:07.045634573 +0000 UTC m=+0.248936148 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:20:07 localhost systemd[1]: 
79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:20:07 localhost podman[80785]: 2026-02-01 08:20:07.050486884 +0000 UTC m=+0.262073137 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team) Feb 1 03:20:07 localhost podman[80792]: 2026-02-01 08:20:07.112365505 +0000 UTC m=+0.319513960 container health_status 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:20:07 localhost podman[80785]: 2026-02-01 08:20:07.13464747 +0000 UTC m=+0.346233713 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, distribution-scope=public, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z) Feb 1 03:20:07 localhost podman[80792]: 2026-02-01 08:20:07.143607749 +0000 UTC m=+0.350755904 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, 
managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 1 03:20:07 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:20:07 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:20:09 localhost podman[80899]: 2026-02-01 08:20:09.857682237 +0000 UTC m=+0.075878198 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container) Feb 1 03:20:10 localhost podman[80899]: 2026-02-01 08:20:10.228861247 +0000 UTC m=+0.447057258 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:20:10 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:20:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:20:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:20:11 localhost systemd[1]: tmp-crun.uUb842.mount: Deactivated successfully. 
Feb 1 03:20:11 localhost podman[80924]: 2026-02-01 08:20:11.901560625 +0000 UTC m=+0.113505512 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:20:11 localhost podman[80924]: 2026-02-01 08:20:11.921970021 +0000 UTC m=+0.133914958 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z) Feb 1 03:20:11 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:20:11 localhost podman[80923]: 2026-02-01 08:20:11.904555918 +0000 UTC m=+0.121185262 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64) Feb 1 03:20:11 localhost podman[80923]: 2026-02-01 08:20:11.986432513 +0000 UTC m=+0.203061917 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git) Feb 1 03:20:11 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:20:14 localhost python3[80986]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 1 03:20:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 03:20:17 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 03:20:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 03:20:18 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 03:20:18 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 03:20:18 localhost systemd[1]: run-r0e32251fedf147668532c7f7cd047fa0.service: Deactivated successfully. Feb 1 03:20:18 localhost systemd[1]: run-rac78739e8e9b4f298edfd3baedad994e.service: Deactivated successfully. Feb 1 03:20:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:20:24 localhost podman[81137]: 2026-02-01 08:20:24.852760313 +0000 UTC m=+0.070828221 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64) Feb 1 03:20:25 localhost podman[81137]: 2026-02-01 08:20:25.064979365 +0000 UTC m=+0.283047133 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 1 03:20:25 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated 
successfully. Feb 1 03:20:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:20:34 localhost podman[81212]: 2026-02-01 08:20:34.874650638 +0000 UTC m=+0.090065301 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, container_name=collectd, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com) Feb 1 03:20:34 localhost podman[81212]: 2026-02-01 08:20:34.909134284 +0000 UTC m=+0.124548907 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, 
vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com) Feb 1 03:20:34 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
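The repeated "Started /usr/bin/podman healthcheck run <id>" entries above are transient systemd units, named after the full container ID (hence the matching "<id>.service: Deactivated successfully" lines), that podman sets up for every container defined with a healthcheck; each run simply executes the container's configured test and exits. A minimal sketch of driving the same check by hand from Python, assuming podman is on PATH and reusing a container ID copied from the journal above (ceilometer_agent_ipmi); `podman healthcheck run` is documented to exit 0 when the check passes and non-zero otherwise:

import subprocess

# Container ID copied from the "Started /usr/bin/podman healthcheck run ..." entry above
# (the ceilometer_agent_ipmi container).
CONTAINER_ID = "79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c"

def run_healthcheck(container_id: str) -> bool:
    """Invoke the same command the transient systemd unit runs and report the result."""
    proc = subprocess.run(
        ["podman", "healthcheck", "run", container_id],
        capture_output=True,
        text=True,
    )
    # Exit status 0 means the container's configured test (here '/openstack/healthcheck') succeeded.
    return proc.returncode == 0

if __name__ == "__main__":
    print("healthy" if run_healthcheck(CONTAINER_ID) else "unhealthy")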
Feb 1 03:20:37 localhost podman[81235]: 2026-02-01 08:20:37.870341222 +0000 UTC m=+0.079651086 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 1 03:20:37 localhost podman[81235]: 2026-02-01 08:20:37.924648367 +0000 UTC m=+0.133958201 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:20:37 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
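Each health_status/exec_died event above dumps the container's full label set, including a config_data label that tripleo_ansible attaches with the container definition (image, healthcheck test, volumes, and so on); in the journal it prints with Python literal syntax (single quotes, True/False) rather than JSON. A sketch of pulling that label back out of a running container, assuming the stored value is the same Python-literal text shown above and that `podman inspect` exposes it under .Config.Labels:

import ast
import json
import subprocess

def container_config_data(name: str) -> dict:
    """Recover the tripleo 'config_data' label for one container as a Python dict."""
    raw = subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True,
    ).stdout
    labels = json.loads(raw)[0]["Config"]["Labels"]
    # The label value uses Python literal syntax (single quotes, True/False), not JSON,
    # so ast.literal_eval rather than json.loads is used to turn it back into a dict.
    return ast.literal_eval(labels["config_data"])

if __name__ == "__main__":
    cfg = container_config_data("ceilometer_agent_compute")
    print(cfg["healthcheck"]["test"], cfg["image"])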
Feb 1 03:20:37 localhost podman[81234]: 2026-02-01 08:20:37.915351626 +0000 UTC m=+0.125242098 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.13) Feb 1 03:20:37 localhost podman[81233]: 2026-02-01 08:20:37.981465089 +0000 UTC m=+0.192821807 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Feb 1 03:20:37 localhost podman[81236]: 2026-02-01 08:20:37.941915655 +0000 UTC m=+0.144004804 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible) Feb 1 03:20:38 localhost podman[81234]: 2026-02-01 08:20:38.000771062 +0000 UTC m=+0.210661614 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:20:38 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:20:38 localhost podman[81233]: 2026-02-01 08:20:38.015606974 +0000 UTC m=+0.226963662 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20260112.1) Feb 1 03:20:38 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:20:38 localhost podman[81236]: 2026-02-01 08:20:38.072481978 +0000 UTC m=+0.274571157 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:30Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5) Feb 1 03:20:38 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
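All five checks kicked off at 03:20:37 come back health_status=healthy, but auditing that by eye across lines this dense is tedious. Below is a small parser for exactly the podman event format shown above (timestamp, monotonic offset, event type, 64-hex container ID, then the parenthesised attribute list); it is a sketch against the lines in this journal, for example fed from `journalctl -o cat -t podman`, not a general podman event parser:

import re
from typing import Iterator

# Matches the podman container-event lines shown above, e.g.
# "... podman[81235]: 2026-02-01 08:20:37.870341222 +0000 UTC m=+0.079651086 container
#  health_status <64-hex id> (image=..., container_name=ceilometer_agent_compute, health_status=healthy, ...)"
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) \+0000 UTC m=\+(?P<mono>[\d.]+) "
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*?)\)"
)

def parse_podman_events(journal_text: str) -> Iterator[dict]:
    """Yield one record per health_status / exec_died event found in raw journal text."""
    for m in EVENT_RE.finditer(journal_text):
        attrs = m.group("attrs")
        name = re.search(r"container_name=([^,)]+)", attrs)
        health = re.search(r"health_status=([^,)]+)", attrs)
        yield {
            "timestamp": m.group("ts"),
            "monotonic_offset_s": float(m.group("mono")),
            "event": m.group("event"),
            "container_id": m.group("cid"),
            "container_name": name.group(1) if name else None,
            "health_status": health.group(1) if health else None,  # only present on health_status events
        }

if __name__ == "__main__":
    import sys
    # Print which containers reported a health result, e.g. piped from `journalctl -o cat -t podman`.
    for rec in parse_podman_events(sys.stdin.read()):
        if rec["event"] == "health_status":
            print(rec["timestamp"], rec["container_name"], rec["health_status"])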
Feb 1 03:20:38 localhost podman[81232]: 2026-02-01 08:20:38.09080215 +0000 UTC m=+0.303439478 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:20:38 localhost podman[81232]: 2026-02-01 08:20:38.127777784 +0000 UTC m=+0.340415062 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:20:38 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:20:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
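Both podman events for a given check are written by the same podman process (pid 81232 for the logrotate_crond check above) and carry a monotonic offset of the form m=+<seconds> measured from that process's start. Reading the offsets off the entries above, the health_status event lands about 0.30 s in, which roughly bounds podman start-up plus the '/usr/share/openstack-tripleo-common/healthcheck/cron' test, and the exec_died event follows about 37 ms later. The arithmetic, with the offsets copied from the journal:

# Monotonic offsets ("m=+<seconds>") copied from the logrotate_crond entries above (podman pid 81232).
health_status_offset_s = 0.303439478   # container health_status event
exec_died_offset_s = 0.340415062       # container exec_died event

gap_s = exec_died_offset_s - health_status_offset_s
print(f"health_status recorded ~{health_status_offset_s:.3f} s after podman start")
print(f"exec_died follows ~{gap_s * 1000:.0f} ms later")   # ~37 ms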
Feb 1 03:20:40 localhost podman[81345]: 2026-02-01 08:20:40.862511175 +0000 UTC m=+0.078468889 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, version=17.1.13) Feb 1 03:20:41 localhost podman[81345]: 2026-02-01 08:20:41.240693364 +0000 UTC m=+0.456651008 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 1 03:20:41 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:20:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:20:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:20:42 localhost systemd[1]: tmp-crun.9Aph3o.mount: Deactivated successfully. 
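The labels above also carry a config_id recording which deployment step created each container: tripleo_step1 for metrics_qdr, tripleo_step3 for collectd and iscsid, tripleo_step4 for the ceilometer agents, logrotate_crond and nova_migration_target, and tripleo_step5 for nova_compute. A sketch that groups the running containers by that label, assuming `podman ps --format json` exposes the Names and Labels fields as recent podman releases do:

import json
import subprocess
from collections import defaultdict

def containers_by_config_id() -> dict[str, list[str]]:
    """Group running containers by their tripleo 'config_id' label (tripleo_step1..5)."""
    raw = subprocess.run(
        ["podman", "ps", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    groups: dict[str, list[str]] = defaultdict(list)
    for entry in json.loads(raw):
        labels = entry.get("Labels") or {}
        name = entry["Names"][0] if entry.get("Names") else entry.get("Id", "")[:12]
        groups[labels.get("config_id", "unlabelled")].append(name)
    return dict(groups)

if __name__ == "__main__":
    for step, names in sorted(containers_by_config_id().items()):
        print(f"{step}: {', '.join(sorted(names))}")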
Feb 1 03:20:42 localhost podman[81370]: 2026-02-01 08:20:42.862535234 +0000 UTC m=+0.079005535 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, io.openshift.expose-services=) Feb 1 03:20:42 localhost podman[81369]: 2026-02-01 08:20:42.916912561 +0000 UTC m=+0.131193494 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1) Feb 1 03:20:42 localhost podman[81370]: 2026-02-01 08:20:42.93612093 +0000 UTC m=+0.152591241 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:20:42 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:20:42 localhost podman[81369]: 2026-02-01 08:20:42.960931174 +0000 UTC m=+0.175212087 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent) Feb 1 03:20:42 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:20:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:20:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4946 writes, 22K keys, 4946 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4946 writes, 558 syncs, 8.86 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:20:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:20:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4734 writes, 21K keys, 4734 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4734 writes, 481 syncs, 9.84 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:20:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
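The two ceph-osd rocksdb "DB Stats" dumps above print raw counters alongside derived figures, and the derived figures can be re-checked from the raw ones: 4946 WAL writes over 558 syncs gives the reported 8.86 writes per sync for osd 31357, 4734 over 481 gives 9.84 for osd 32318, and 0.02 GB ingested over the 2400.1 s uptime is on the order of the reported 0.01 MB/s (the printed values are rounded, so the recomputation is only approximate). The check in Python:

# Raw counters copied from the two ceph-osd rocksdb "DB Stats" dumps above.
osds = {
    "ceph-osd[31357]": {"wal_writes": 4946, "wal_syncs": 558, "ingest_gb": 0.02, "uptime_s": 2400.1},
    "ceph-osd[32318]": {"wal_writes": 4734, "wal_syncs": 481, "ingest_gb": 0.02, "uptime_s": 2400.1},
}

for name, s in osds.items():
    writes_per_sync = s["wal_writes"] / s["wal_syncs"]        # reported as 8.86 and 9.84
    ingest_mb_per_s = s["ingest_gb"] * 1024 / s["uptime_s"]   # treating GB as 1024 MB; reported (rounded) as 0.01 MB/s
    print(f"{name}: {writes_per_sync:.2f} writes/sync, ~{ingest_mb_per_s:.3f} MB/s ingest")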
Feb 1 03:20:55 localhost podman[81417]: 2026-02-01 08:20:55.863622929 +0000 UTC m=+0.076558890 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:20:56 localhost podman[81417]: 2026-02-01 08:20:56.066605072 +0000 UTC m=+0.279541063 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.13, release=1766032510) Feb 1 03:20:56 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:20:58 localhost python3[81523]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:21:02 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 03:21:02 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 03:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:21:05 localhost podman[81730]: 2026-02-01 08:21:05.87096577 +0000 UTC m=+0.086458867 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:21:05 localhost podman[81730]: 2026-02-01 08:21:05.88154223 +0000 UTC m=+0.097035247 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true) Feb 1 03:21:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:21:08 localhost systemd[1]: tmp-crun.M5ecwZ.mount: Deactivated successfully. 
Feb 1 03:21:08 localhost podman[81750]: 2026-02-01 08:21:08.887481275 +0000 UTC m=+0.098725031 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 1 03:21:08 localhost podman[81750]: 2026-02-01 08:21:08.9257633 +0000 UTC m=+0.137007106 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5) Feb 1 03:21:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:21:08 localhost podman[81752]: 2026-02-01 08:21:08.94179929 +0000 UTC m=+0.148091371 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, distribution-scope=public, release=1766032510) Feb 1 03:21:08 localhost podman[81752]: 2026-02-01 08:21:08.982676925 +0000 UTC m=+0.188968976 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1766032510, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Feb 1 03:21:08 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 
MACDST=fa:16:3e:d0:c8:c4 MACPROTO=0800 SRC=82.147.84.55 DST=38.102.83.164 LEN=40 TOS=0x08 PREC=0x20 TTL=242 ID=61520 PROTO=TCP SPT=53998 DPT=9090 SEQ=1487044726 ACK=0 WINDOW=1024 RES=0x00 SYN URGP=0 Feb 1 03:21:08 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:21:09 localhost podman[81753]: 2026-02-01 08:21:08.999394537 +0000 UTC m=+0.205723669 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1) Feb 1 03:21:09 localhost podman[81751]: 2026-02-01 08:21:09.04500908 +0000 UTC m=+0.253294734 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:21:09 localhost podman[81753]: 2026-02-01 08:21:09.058711247 +0000 UTC m=+0.265040389 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true) Feb 1 03:21:09 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:21:09 localhost podman[81751]: 2026-02-01 08:21:09.076664378 +0000 UTC m=+0.284950052 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, version=17.1.13) Feb 1 03:21:09 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:21:09 localhost podman[81759]: 2026-02-01 08:21:09.144317639 +0000 UTC m=+0.342630231 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible) Feb 1 03:21:09 localhost podman[81759]: 2026-02-01 08:21:09.200738699 +0000 UTC m=+0.399051291 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:21:09 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:21:09 localhost systemd[1]: tmp-crun.4MlpO6.mount: Deactivated successfully. Feb 1 03:21:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:21:11 localhost podman[81866]: 2026-02-01 08:21:11.859800729 +0000 UTC m=+0.073879585 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:21:12 localhost podman[81866]: 2026-02-01 08:21:12.214583029 +0000 UTC m=+0.428661965 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, release=1766032510, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Feb 1 03:21:12 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:21:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:21:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:21:13 localhost systemd[1]: tmp-crun.MWGvA9.mount: Deactivated successfully. 
Feb 1 03:21:13 localhost podman[81890]: 2026-02-01 08:21:13.883495647 +0000 UTC m=+0.090246447 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:21:13 localhost systemd[1]: tmp-crun.nYPi31.mount: Deactivated successfully. 
Feb 1 03:21:13 localhost podman[81891]: 2026-02-01 08:21:13.941181667 +0000 UTC m=+0.144277552 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 1 03:21:13 localhost podman[81890]: 2026-02-01 08:21:13.967647143 +0000 UTC m=+0.174397953 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, version=17.1.13, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team) Feb 1 03:21:13 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:21:13 localhost podman[81891]: 2026-02-01 08:21:13.992737196 +0000 UTC m=+0.195833131 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510) Feb 1 03:21:14 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:21:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:21:26 localhost podman[81934]: 2026-02-01 08:21:26.861060818 +0000 UTC m=+0.079639756 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:21:27 localhost podman[81934]: 2026-02-01 08:21:27.058082115 +0000 UTC m=+0.276661073 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:21:27 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:21:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:21:30 localhost recover_tripleo_nova_virtqemud[81964]: 62016 Feb 1 03:21:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:21:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:21:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:21:36 localhost systemd[1]: tmp-crun.v7MfNX.mount: Deactivated successfully. 
Feb 1 03:21:36 localhost podman[82010]: 2026-02-01 08:21:36.873313903 +0000 UTC m=+0.092257549 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 1 03:21:36 localhost podman[82010]: 2026-02-01 08:21:36.890136668 +0000 UTC m=+0.109080324 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, vcs-type=git, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:21:36 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:21:39 localhost podman[82031]: 2026-02-01 08:21:39.887418271 +0000 UTC m=+0.100268529 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:21:39 localhost podman[82039]: 2026-02-01 08:21:39.940440956 +0000 UTC m=+0.144670525 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 1 03:21:39 localhost podman[82033]: 2026-02-01 08:21:39.985431009 +0000 UTC m=+0.192958101 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:21:39 localhost podman[82032]: 2026-02-01 08:21:39.992720267 +0000 UTC m=+0.203118369 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible) Feb 1 03:21:39 localhost podman[82031]: 2026-02-01 08:21:39.999039114 +0000 UTC m=+0.211889412 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, 
build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:21:40 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:21:40 localhost podman[82039]: 2026-02-01 08:21:40.017099597 +0000 UTC m=+0.221329196 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 1 03:21:40 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:21:40 localhost podman[82033]: 2026-02-01 08:21:40.041202729 +0000 UTC m=+0.248729861 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 1 03:21:40 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:21:40 localhost podman[82032]: 2026-02-01 08:21:40.055109773 +0000 UTC m=+0.265507855 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:21:40 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:21:40 localhost podman[82030]: 2026-02-01 08:21:39.9213716 +0000 UTC m=+0.137732068 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64) Feb 1 03:21:40 localhost podman[82030]: 2026-02-01 08:21:40.101733208 +0000 UTC m=+0.318093676 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:21:40 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:21:40 localhost systemd[1]: tmp-crun.r3jBll.mount: Deactivated successfully. Feb 1 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:21:42 localhost podman[82148]: 2026-02-01 08:21:42.861432229 +0000 UTC m=+0.076721965 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:21:43 localhost podman[82148]: 2026-02-01 08:21:43.228648866 +0000 UTC m=+0.443938612 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510) Feb 1 03:21:43 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:21:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:21:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:21:44 localhost systemd[1]: tmp-crun.tZc4a0.mount: Deactivated successfully. 
Feb 1 03:21:44 localhost podman[82170]: 2026-02-01 08:21:44.879489231 +0000 UTC m=+0.084534608 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:21:44 localhost podman[82171]: 2026-02-01 08:21:44.858958701 +0000 UTC m=+0.065221457 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team) Feb 1 03:21:44 localhost podman[82171]: 2026-02-01 08:21:44.941704492 +0000 UTC m=+0.147967218 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:21:44 localhost podman[82170]: 2026-02-01 08:21:44.952426426 +0000 UTC m=+0.157471853 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:21:44 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:21:44 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:21:51 localhost python3[82233]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:21:54 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 03:21:55 localhost rhsm-service[6583]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 1 03:21:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:21:57 localhost systemd[1]: tmp-crun.XOo4jW.mount: Deactivated successfully. Feb 1 03:21:57 localhost podman[82364]: 2026-02-01 08:21:57.877177607 +0000 UTC m=+0.092662901 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=metrics_qdr, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, architecture=x86_64) Feb 1 03:21:58 localhost podman[82364]: 2026-02-01 08:21:58.071395147 +0000 UTC m=+0.286880381 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:21:58 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:22:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:22:07 localhost systemd[1]: tmp-crun.YSS0Ws.mount: Deactivated successfully. 
Feb 1 03:22:07 localhost podman[82576]: 2026-02-01 08:22:07.864604559 +0000 UTC m=+0.080359679 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:22:07 localhost podman[82576]: 2026-02-01 08:22:07.879607937 +0000 UTC m=+0.095363037 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:22:07 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:22:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:22:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:22:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:22:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:22:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:22:10 localhost systemd[1]: tmp-crun.Bi5pbu.mount: Deactivated successfully. 
Feb 1 03:22:10 localhost podman[82596]: 2026-02-01 08:22:10.91758847 +0000 UTC m=+0.132810915 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Feb 1 03:22:10 localhost podman[82596]: 2026-02-01 08:22:10.962476261 +0000 UTC m=+0.177698676 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, 
build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13) Feb 1 03:22:10 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:22:10 localhost podman[82598]: 2026-02-01 08:22:10.977649464 +0000 UTC m=+0.184734785 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, 
name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:22:10 localhost podman[82598]: 2026-02-01 08:22:10.988831722 +0000 UTC m=+0.195917073 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Feb 1 03:22:11 localhost systemd[1]: 
28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:22:11 localhost podman[82597]: 2026-02-01 08:22:10.969734257 +0000 UTC m=+0.180344838 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1) Feb 1 03:22:11 localhost podman[82599]: 2026-02-01 08:22:11.035518379 +0000 UTC m=+0.239719570 container health_status 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=) Feb 1 03:22:11 localhost podman[82605]: 2026-02-01 08:22:10.94099294 +0000 UTC m=+0.142381543 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, config_id=tripleo_step4, vcs-type=git, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:22:11 localhost podman[82597]: 2026-02-01 08:22:11.050735934 +0000 UTC m=+0.261346515 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:22:11 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:22:11 localhost podman[82605]: 2026-02-01 08:22:11.073585476 +0000 UTC m=+0.274974049 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1766032510, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:22:11 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:22:11 localhost podman[82599]: 2026-02-01 08:22:11.09067618 +0000 UTC m=+0.294877311 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 1 03:22:11 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:22:11 localhost systemd[1]: tmp-crun.h9NXoB.mount: Deactivated successfully. Feb 1 03:22:11 localhost python3[82723]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 1 03:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:22:13 localhost systemd[1]: tmp-crun.XJN8YL.mount: Deactivated successfully. Feb 1 03:22:13 localhost podman[82724]: 2026-02-01 08:22:13.869901661 +0000 UTC m=+0.086250732 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 1 03:22:14 localhost podman[82724]: 2026-02-01 08:22:14.233797934 +0000 UTC m=+0.450147015 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 1 03:22:14 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:22:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:22:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:22:15 localhost systemd[1]: tmp-crun.WUaCIb.mount: Deactivated successfully. 
Feb 1 03:22:15 localhost podman[82745]: 2026-02-01 08:22:15.863355604 +0000 UTC m=+0.082631408 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z) Feb 1 03:22:15 localhost systemd[1]: tmp-crun.T2ByYo.mount: Deactivated successfully. 
Feb 1 03:22:15 localhost podman[82745]: 2026-02-01 08:22:15.911557208 +0000 UTC m=+0.130833022 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:22:15 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:22:15 localhost podman[82746]: 2026-02-01 08:22:15.918496475 +0000 UTC m=+0.134048523 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13) Feb 1 03:22:16 localhost podman[82746]: 2026-02-01 08:22:16.007776451 +0000 UTC m=+0.223328459 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:22:16 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:22:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:22:28 localhost systemd[1]: tmp-crun.tz0iIv.mount: Deactivated successfully. Feb 1 03:22:28 localhost podman[82793]: 2026-02-01 08:22:28.874671318 +0000 UTC m=+0.089819894 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, distribution-scope=public, batch=17.1_20260112.1, container_name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 1 03:22:29 localhost podman[82793]: 2026-02-01 08:22:29.066693018 +0000 UTC m=+0.281841584 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:22:29 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:22:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:22:38 localhost podman[82867]: 2026-02-01 08:22:38.86916832 +0000 UTC m=+0.084118865 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=collectd, maintainer=OpenStack TripleO Team, version=17.1.13, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 1 03:22:38 localhost podman[82867]: 2026-02-01 08:22:38.88162055 +0000 UTC m=+0.096571075 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, container_name=collectd, build-date=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:22:38 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:22:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:22:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:22:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:22:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:22:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:22:41 localhost podman[82890]: 2026-02-01 08:22:41.881466546 +0000 UTC m=+0.092720400 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:22:41 localhost podman[82890]: 2026-02-01 08:22:41.918761902 +0000 UTC m=+0.130015716 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, build-date=2026-01-12T22:34:43Z, 
name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:22:41 localhost podman[82894]: 2026-02-01 08:22:41.934599912 +0000 UTC m=+0.137389315 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 1 03:22:41 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:22:41 localhost podman[82894]: 2026-02-01 08:22:41.970865247 +0000 UTC m=+0.173654670 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:22:41 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:22:42 localhost podman[82891]: 2026-02-01 08:22:42.050417935 +0000 UTC m=+0.255351022 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z) Feb 1 03:22:42 localhost podman[82888]: 2026-02-01 08:22:42.017988883 +0000 UTC m=+0.230760242 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack 
Platform 17.1 cron, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:22:42 localhost podman[82889]: 2026-02-01 08:22:42.084878527 +0000 UTC m=+0.296305796 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z) Feb 1 03:22:42 localhost podman[82891]: 2026-02-01 08:22:42.08770005 +0000 UTC m=+0.292633187 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:22:42 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:22:42 localhost podman[82888]: 2026-02-01 08:22:42.103944072 +0000 UTC m=+0.316715361 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:22:42 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:22:42 localhost podman[82889]: 2026-02-01 08:22:42.120606676 +0000 UTC m=+0.332033935 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:22:42 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:22:42 localhost systemd[1]: tmp-crun.FlcBUP.mount: Deactivated successfully. Feb 1 03:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:22:44 localhost podman[83005]: 2026-02-01 08:22:44.858611661 +0000 UTC m=+0.074911003 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:22:45 localhost podman[83005]: 2026-02-01 08:22:45.23365067 +0000 UTC m=+0.449950042 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z) Feb 1 03:22:45 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:22:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:22:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:22:46 localhost podman[83028]: 2026-02-01 08:22:46.863552477 +0000 UTC m=+0.074432927 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, vcs-type=git, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:22:46 localhost podman[83028]: 2026-02-01 08:22:46.902748339 +0000 UTC m=+0.113628779 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1766032510, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:22:46 localhost systemd[1]: tmp-crun.y4DxLW.mount: Deactivated successfully. 
Feb 1 03:22:46 localhost podman[83029]: 2026-02-01 08:22:46.922235157 +0000 UTC m=+0.129534601 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, batch=17.1_20260112.1, release=1766032510) Feb 1 03:22:46 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:22:46 localhost podman[83029]: 2026-02-01 08:22:46.946068454 +0000 UTC m=+0.153367918 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:22:46 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:22:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:22:59 localhost systemd[1]: tmp-crun.xwY61N.mount: Deactivated successfully. 
Feb 1 03:22:59 localhost podman[83075]: 2026-02-01 08:22:59.866264064 +0000 UTC m=+0.087747823 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:23:00 localhost podman[83075]: 2026-02-01 08:23:00.098382536 +0000 UTC m=+0.319866235 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510) Feb 1 03:23:00 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:23:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:23:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:23:09 localhost recover_tripleo_nova_virtqemud[83183]: 62016 Feb 1 03:23:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:23:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 1 03:23:09 localhost podman[83181]: 2026-02-01 08:23:09.875308926 +0000 UTC m=+0.089126324 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:23:09 localhost podman[83181]: 2026-02-01 08:23:09.890271169 +0000 UTC m=+0.104088567 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git) Feb 1 03:23:09 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:23:12 localhost systemd[1]: session-34.scope: Deactivated successfully. Feb 1 03:23:12 localhost systemd[1]: session-34.scope: Consumed 18.941s CPU time. Feb 1 03:23:12 localhost systemd-logind[761]: Session 34 logged out. Waiting for processes to exit. Feb 1 03:23:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:23:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:23:12 localhost systemd-logind[761]: Removed session 34. 
Feb 1 03:23:12 localhost podman[83205]: 2026-02-01 08:23:12.110439227 +0000 UTC m=+0.083267939 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc.) Feb 1 03:23:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:23:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:23:12 localhost podman[83205]: 2026-02-01 08:23:12.167468778 +0000 UTC m=+0.140297530 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4) Feb 1 03:23:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:23:12 localhost systemd[1]: tmp-crun.GyaFgG.mount: Deactivated successfully. 
Feb 1 03:23:12 localhost podman[83204]: 2026-02-01 08:23:12.178751963 +0000 UTC m=+0.152956337 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 1 03:23:12 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:23:12 localhost podman[83204]: 2026-02-01 08:23:12.217617785 +0000 UTC m=+0.191822169 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:23:12 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:23:12 localhost podman[83234]: 2026-02-01 08:23:12.231162017 +0000 UTC m=+0.101643375 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:23:12 localhost podman[83263]: 2026-02-01 08:23:12.266563206 +0000 UTC m=+0.084700442 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, 
managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:23:12 localhost podman[83235]: 2026-02-01 08:23:12.278239062 +0000 UTC m=+0.143479455 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, version=17.1.13, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1) Feb 1 03:23:12 localhost podman[83235]: 2026-02-01 08:23:12.288710763 +0000 UTC m=+0.153951156 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:23:12 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:23:12 localhost podman[83234]: 2026-02-01 08:23:12.330887574 +0000 UTC m=+0.201368952 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_id=tripleo_step4) Feb 1 03:23:12 localhost systemd[1]: 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:23:12 localhost podman[83263]: 2026-02-01 08:23:12.343622132 +0000 UTC m=+0.161759398 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:23:12 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:23:15 localhost podman[83314]: 2026-02-01 08:23:15.865772424 +0000 UTC m=+0.082542017 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 1 03:23:16 localhost podman[83314]: 2026-02-01 08:23:16.223668196 +0000 UTC m=+0.440437789 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:23:16 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:23:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:23:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:23:17 localhost podman[83339]: 2026-02-01 08:23:17.856219424 +0000 UTC m=+0.067218294 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:23:17 localhost podman[83339]: 2026-02-01 08:23:17.907424122 +0000 UTC m=+0.118423032 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, vcs-type=git, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:23:17 localhost systemd[1]: tmp-crun.XZgtpW.mount: Deactivated successfully. Feb 1 03:23:17 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:23:17 localhost podman[83338]: 2026-02-01 08:23:17.925052894 +0000 UTC m=+0.135165669 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, release=1766032510) Feb 1 03:23:17 localhost podman[83338]: 2026-02-01 08:23:17.970301336 +0000 UTC m=+0.180414101 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vcs-type=git, container_name=ovn_metadata_agent, version=17.1.13, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Feb 1 03:23:17 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:23:20 localhost sshd[83386]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:23:21 localhost sshd[83387]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:23:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:23:30 localhost podman[83388]: 2026-02-01 08:23:30.875930883 +0000 UTC m=+0.090784112 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, batch=17.1_20260112.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:23:31 localhost podman[83388]: 2026-02-01 08:23:31.069272396 +0000 UTC m=+0.284125665 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, 
name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, version=17.1.13, tcib_managed=true, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:23:31 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:23:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:23:40 localhost systemd[1]: tmp-crun.tqlSXz.mount: Deactivated successfully. 
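The podman entries above log one transient healthcheck pass per container: a "container health_status" event carrying the result, the matching "container exec_died" event, and the container-ID .service unit deactivating. A minimal Python sketch, assuming one journal entry per line (as journalctl/syslog normally emits them) and only the name= / health_status= labels visible in these events, for extracting the per-container result:

    import re
    import sys

    # Matches the syslog preamble plus the podman "container health_status"
    # event fields seen in the entries above, e.g.
    #   Feb  1 03:23:30 localhost podman[83388]: ... container health_status
    #   75c8a36d... (image=..., name=metrics_qdr, health_status=healthy, ...)
    EVENT = re.compile(
        r"(?P<stamp>\w+ +\d+ +[\d:]+) +\S+ +podman\[\d+\]: .*?"
        r"container health_status [0-9a-f]+ \(.*?"
        r"name=(?P<name>[^,]+),.*?"
        r"health_status=(?P<health>[^,)]+)"
    )

    for line in sys.stdin:
        m = EVENT.match(line)
        if m:
            print(m.group("stamp"), m.group("name"), m.group("health"))

Fed the output of journalctl -t podman on the host (or a messages file like this one), it should print one timestamp / container name / result row per completed check.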
Feb 1 03:23:40 localhost podman[83463]: 2026-02-01 08:23:40.866191687 +0000 UTC m=+0.085338981 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd) Feb 1 03:23:40 localhost podman[83463]: 2026-02-01 08:23:40.905692538 +0000 UTC m=+0.124839812 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64) Feb 1 03:23:40 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:23:42 localhost podman[83483]: 2026-02-01 08:23:42.880893013 +0000 UTC m=+0.087921237 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.expose-services=) Feb 1 03:23:42 localhost podman[83483]: 2026-02-01 08:23:42.889410046 +0000 UTC m=+0.096438280 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, tcib_managed=true, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-type=git) Feb 1 03:23:42 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:23:42 localhost systemd[1]: tmp-crun.j02HBJ.mount: Deactivated successfully. 
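The config_data recorded for these containers declares 'healthcheck': {'test': '/openstack/healthcheck'}, which is the probe each podman healthcheck run invocation executes inside the container. A minimal sketch, assuming root on the compute host and container names taken from the entries above, of triggering that same probe on demand and reading the exit status:

    import subprocess

    def probe(container: str) -> bool:
        # "podman healthcheck run" executes the container's configured test
        # (here /openstack/healthcheck) once; exit code 0 means healthy.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    # Container names as they appear in the log entries above.
    for name in ("iscsid", "collectd", "metrics_qdr"):
        print(name, "healthy" if probe(name) else "unhealthy")

A zero exit code corresponds to the health_status=healthy results logged above; a non-zero code marks the container unhealthy.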
Feb 1 03:23:42 localhost podman[83493]: 2026-02-01 08:23:42.998124579 +0000 UTC m=+0.198086903 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 1 03:23:43 localhost podman[83484]: 2026-02-01 08:23:42.952332812 +0000 UTC m=+0.153895824 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, url=https://www.redhat.com) Feb 1 03:23:43 localhost podman[83484]: 2026-02-01 08:23:43.037866758 +0000 UTC m=+0.239429830 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:23:43 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:23:43 localhost podman[83493]: 2026-02-01 08:23:43.059589932 +0000 UTC m=+0.259552256 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible) Feb 1 03:23:43 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:23:43 localhost podman[83482]: 2026-02-01 08:23:43.039225168 +0000 UTC m=+0.247861060 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step5, tcib_managed=true) Feb 1 03:23:43 localhost podman[83481]: 2026-02-01 08:23:43.145281453 +0000 UTC m=+0.358384507 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4) Feb 1 03:23:43 localhost podman[83481]: 2026-02-01 08:23:43.158579938 +0000 UTC m=+0.371683042 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step4, 
name=rhosp-rhel9/openstack-cron, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team) Feb 1 03:23:43 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:23:43 localhost podman[83482]: 2026-02-01 08:23:43.172074217 +0000 UTC m=+0.380710129 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:23:43 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:23:43 localhost systemd[1]: tmp-crun.P0WUTL.mount: Deactivated successfully. 
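The result of the most recent check is also stored on the container itself, so current health can be read back without waiting for the next timer pass. A minimal sketch, assuming host-level podman access; for containers that define a healthcheck, podman appends the last result to the human-readable status (e.g. "Up 5 hours (healthy)"):

    import subprocess

    # List every container with its status string; the trailing "(healthy)" /
    # "(unhealthy)" part reflects the latest healthcheck result.
    out = subprocess.run(
        ["podman", "ps", "--format", "{{.Names}}\t{{.Status}}"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout

    for line in out.splitlines():
        name, status = line.split("\t", 1)
        print(f"{name:32} {status}")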
Feb 1 03:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:23:46 localhost systemd[1]: tmp-crun.N8lusH.mount: Deactivated successfully. Feb 1 03:23:46 localhost podman[83595]: 2026-02-01 08:23:46.872705262 +0000 UTC m=+0.088269368 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, version=17.1.13, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 1 03:23:47 localhost podman[83595]: 2026-02-01 08:23:47.24193267 +0000 UTC m=+0.457496776 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:23:47 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:23:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:23:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:23:48 localhost podman[83616]: 2026-02-01 08:23:48.860912864 +0000 UTC m=+0.080408006 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:23:48 localhost podman[83617]: 2026-02-01 08:23:48.917157431 +0000 UTC m=+0.132631053 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2026-01-12T22:36:40Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:23:48 localhost podman[83616]: 2026-02-01 08:23:48.931680322 +0000 UTC m=+0.151175414 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible) Feb 1 03:23:48 localhost podman[83617]: 2026-02-01 08:23:48.943979727 +0000 UTC m=+0.159453359 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:23:48 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:23:48 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:24:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:24:01 localhost podman[83664]: 2026-02-01 08:24:01.871468164 +0000 UTC m=+0.086873677 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:24:02 localhost podman[83664]: 2026-02-01 08:24:02.073201596 +0000 UTC m=+0.288607059 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1766032510, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true) Feb 1 03:24:02 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
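The config_data label in the entries above is a Python-style dict literal recording how tripleo_ansible launched the container (image, healthcheck test, bind mounts, environment). A minimal sketch, assuming a label string copied out of such an entry, that parses those fields with only the standard library; the shortened config_data value below is a hypothetical abbreviation of the full one seen in the log:

import ast

# Abbreviated, hypothetical config_data value in the same shape as the log labels above;
# real values carry the full environment, volume, and option lists.
config_data = (
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', "
    "'net': 'host', 'restart': 'always', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev/log:/dev/log']}"
)

cfg = ast.literal_eval(config_data)   # safe literal parsing, no eval()
print(cfg['image'])                   # container image reference
print(cfg['healthcheck']['test'])     # command run by 'podman healthcheck run'
for mount in cfg['volumes']:          # host:container[:options] bind mounts
    print(mount)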
Feb 1 03:24:04 localhost podman[83797]: 2026-02-01 08:24:04.190124623 +0000 UTC m=+0.079131797 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1764794109, distribution-scope=public, architecture=x86_64, RELEASE=main, name=rhceph) Feb 1 03:24:04 localhost podman[83797]: 2026-02-01 08:24:04.290909632 +0000 UTC m=+0.179916806 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, release=1764794109, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, version=7, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Feb 1 03:24:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:24:11 localhost podman[83943]: 2026-02-01 08:24:11.858467662 +0000 UTC m=+0.073817729 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container) Feb 1 03:24:11 localhost podman[83943]: 2026-02-01 08:24:11.892880043 +0000 UTC m=+0.108230090 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:24:11 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:24:13 localhost systemd[1]: tmp-crun.z3K9As.mount: Deactivated successfully. 
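Each healthcheck cycle above follows the same pattern: systemd starts a transient /usr/bin/podman healthcheck run <id> unit, podman logs a container health_status event for the named container, then exec_died, and the unit deactivates. A minimal sketch, assuming journal text in the format shown here, that tallies the reported health_status per container name; the regex relies only on the image=, name=, health_status= field order visible in these entries, and the sample line is an abbreviated, hypothetical copy of one above:

import re
from collections import Counter

# Matches the 'container health_status <id> (image=..., name=..., health_status=...' shape
# used by the podman entries in this journal.
HEALTH_RE = re.compile(
    r"container health_status \S+ \(image=(?P<image>[^,]+), "
    r"name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def summarize(journal_text):
    """Count health_status results per (container name, status) pair."""
    results = Counter()
    for match in HEALTH_RE.finditer(journal_text):
        results[(match.group("name"), match.group("status"))] += 1
    return results

sample = ("Feb 1 03:24:13 localhost podman[83964]: 2026-02-01 08:24:13 ... "
          "container health_status 1543f157 (image=registry.redhat.io/"
          "rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, "
          "health_status=healthy, container_name=nova_compute)")
print(summarize(sample))   # Counter({('nova_compute', 'healthy'): 1})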
Feb 1 03:24:13 localhost podman[83964]: 2026-02-01 08:24:13.894765469 +0000 UTC m=+0.102963583 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:24:13 localhost podman[83964]: 2026-02-01 08:24:13.923599865 +0000 UTC m=+0.131797979 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:24:13 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:24:13 localhost systemd[1]: tmp-crun.Jqc0d0.mount: Deactivated successfully. 
Feb 1 03:24:13 localhost podman[83963]: 2026-02-01 08:24:13.951485462 +0000 UTC m=+0.162858611 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond) Feb 1 03:24:13 localhost podman[83963]: 2026-02-01 08:24:13.991751525 +0000 UTC m=+0.203124664 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:24:14 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:24:14 localhost podman[83965]: 2026-02-01 08:24:13.9949446 +0000 UTC m=+0.198010143 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 1 03:24:14 localhost podman[83977]: 2026-02-01 08:24:14.053519427 +0000 UTC m=+0.247781108 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:24:14 localhost podman[83965]: 
2026-02-01 08:24:14.079721803 +0000 UTC m=+0.282787286 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:24:14 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:24:14 localhost podman[83977]: 2026-02-01 08:24:14.13154964 +0000 UTC m=+0.325811281 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:24:14 localhost podman[83971]: 2026-02-01 08:24:14.154241533 +0000 UTC m=+0.352961996 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:24:14 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:24:14 localhost podman[83971]: 2026-02-01 08:24:14.211901413 +0000 UTC m=+0.410621936 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public) Feb 1 03:24:14 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:24:17 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:24:17 localhost recover_tripleo_nova_virtqemud[84090]: 62016 Feb 1 03:24:17 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:24:17 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 1 03:24:17 localhost podman[84082]: 2026-02-01 08:24:17.876499861 +0000 UTC m=+0.085472796 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:24:18 localhost podman[84082]: 2026-02-01 08:24:18.285740965 +0000 UTC m=+0.494713860 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:24:18 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:24:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:24:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:24:19 localhost podman[84108]: 2026-02-01 08:24:19.875374409 +0000 UTC m=+0.089877226 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent) Feb 1 03:24:19 localhost podman[84109]: 2026-02-01 08:24:19.92906729 +0000 UTC m=+0.141057253 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, distribution-scope=public, vcs-type=git, architecture=x86_64) Feb 1 03:24:19 localhost podman[84108]: 2026-02-01 08:24:19.947735744 +0000 UTC m=+0.162238531 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=) Feb 1 03:24:19 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:24:20 localhost podman[84109]: 2026-02-01 08:24:20.000662474 +0000 UTC m=+0.212652487 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 1 03:24:20 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:24:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:24:32 localhost systemd[1]: tmp-crun.tKb2jn.mount: Deactivated successfully. Feb 1 03:24:32 localhost podman[84155]: 2026-02-01 08:24:32.883017711 +0000 UTC m=+0.092838894 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, release=1766032510, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:24:33 localhost podman[84155]: 2026-02-01 08:24:33.131726805 +0000 UTC m=+0.341548008 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 03:24:33 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:24:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:24:42 localhost systemd[1]: tmp-crun.G4IO2Y.mount: Deactivated successfully. 
Feb 1 03:24:42 localhost podman[84231]: 2026-02-01 08:24:42.876342737 +0000 UTC m=+0.091424752 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:24:42 localhost podman[84231]: 2026-02-01 08:24:42.88962571 +0000 UTC m=+0.104707795 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container) Feb 1 03:24:42 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:24:44 localhost systemd[1]: tmp-crun.JbSsbN.mount: Deactivated successfully. 
Feb 1 03:24:44 localhost podman[84252]: 2026-02-01 08:24:44.871264348 +0000 UTC m=+0.088287900 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, batch=17.1_20260112.1, version=17.1.13) Feb 1 03:24:44 localhost podman[84252]: 2026-02-01 08:24:44.883479169 +0000 UTC m=+0.100502721 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., 
container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:24:44 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:24:44 localhost podman[84253]: 2026-02-01 08:24:44.914924042 +0000 UTC m=+0.125216033 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git) Feb 1 03:24:44 localhost podman[84261]: 2026-02-01 08:24:44.88419072 +0000 UTC m=+0.085859686 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc.) Feb 1 03:24:44 localhost podman[84261]: 2026-02-01 08:24:44.966607235 +0000 UTC m=+0.168276091 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:24:44 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:24:44 localhost podman[84258]: 2026-02-01 08:24:44.988515534 +0000 UTC m=+0.194289522 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:24:45 localhost podman[84254]: 2026-02-01 08:24:45.030924532 +0000 UTC m=+0.239314138 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:24:45 localhost podman[84254]: 2026-02-01 08:24:45.03962778 +0000 UTC m=+0.248017406 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Feb 1 03:24:45 localhost podman[84258]: 2026-02-01 08:24:45.048674998 +0000 UTC m=+0.254449036 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team) Feb 1 03:24:45 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:24:45 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:24:45 localhost podman[84253]: 2026-02-01 08:24:45.061955842 +0000 UTC m=+0.272247933 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 1 03:24:45 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:24:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:24:48 localhost podman[84370]: 2026-02-01 08:24:48.867824987 +0000 UTC m=+0.083283810 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:24:49 localhost podman[84370]: 2026-02-01 08:24:49.225665248 +0000 UTC m=+0.441124041 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, 
distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 1 03:24:49 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:24:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:24:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:24:50 localhost podman[84391]: 2026-02-01 08:24:50.869986132 +0000 UTC m=+0.082275560 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:24:50 localhost systemd[1]: tmp-crun.ikFB8J.mount: Deactivated successfully. 
Feb 1 03:24:50 localhost podman[84392]: 2026-02-01 08:24:50.924004963 +0000 UTC m=+0.133812798 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510) Feb 1 03:24:50 localhost podman[84391]: 2026-02-01 08:24:50.934178075 +0000 UTC m=+0.146467503 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.5) Feb 1 03:24:50 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:24:50 localhost podman[84392]: 2026-02-01 08:24:50.953620252 +0000 UTC m=+0.163428087 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vcs-type=git) Feb 1 03:24:50 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:25:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:25:03 localhost podman[84437]: 2026-02-01 08:25:03.866581095 +0000 UTC m=+0.079959642 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, version=17.1.13, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:25:04 localhost podman[84437]: 2026-02-01 08:25:04.084546257 +0000 UTC m=+0.297924784 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:25:04 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:25:13 localhost podman[84543]: 2026-02-01 08:25:13.879911466 +0000 UTC m=+0.088802975 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, config_id=tripleo_step3, container_name=collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:25:13 localhost podman[84543]: 2026-02-01 08:25:13.890664215 +0000 UTC m=+0.099555724 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.41.5, architecture=x86_64, container_name=collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:25:13 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:25:15 localhost podman[84567]: 2026-02-01 08:25:15.883026168 +0000 UTC m=+0.087607888 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4) Feb 1 03:25:15 localhost systemd[1]: tmp-crun.NX1yR1.mount: Deactivated successfully. 
Feb 1 03:25:15 localhost podman[84567]: 2026-02-01 08:25:15.942824691 +0000 UTC m=+0.147406441 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_step4, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:25:15 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:25:15 localhost podman[84564]: 2026-02-01 08:25:15.945455149 +0000 UTC m=+0.160089398 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:10:15Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4) Feb 1 03:25:16 localhost podman[84565]: 2026-02-01 08:25:15.999558704 +0000 UTC m=+0.209122932 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:25:16 localhost podman[84564]: 2026-02-01 08:25:16.028647296 +0000 UTC m=+0.243281535 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:25:16 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:25:16 localhost podman[84573]: 2026-02-01 08:25:16.046283098 +0000 UTC m=+0.247910971 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:25:16 localhost podman[84565]: 2026-02-01 08:25:16.053655277 +0000 UTC m=+0.263219475 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, 
container_name=nova_compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_id=tripleo_step5) Feb 1 03:25:16 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:25:16 localhost podman[84566]: 2026-02-01 08:25:16.085236323 +0000 UTC m=+0.293819233 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, container_name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:25:16 localhost podman[84573]: 2026-02-01 08:25:16.101811855 +0000 UTC m=+0.303439718 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:25:16 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:25:16 localhost podman[84566]: 2026-02-01 08:25:16.121666843 +0000 UTC m=+0.330249753 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 1 03:25:16 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:25:19 localhost systemd[1]: tmp-crun.aQz4x6.mount: Deactivated successfully. 
Feb 1 03:25:19 localhost podman[84682]: 2026-02-01 08:25:19.862350846 +0000 UTC m=+0.079024994 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 1 03:25:20 localhost podman[84682]: 2026-02-01 08:25:20.230618866 +0000 UTC m=+0.447293054 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510) Feb 1 03:25:20 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:25:21 localhost systemd[1]: tmp-crun.xTf2Yd.mount: Deactivated successfully. 
Feb 1 03:25:21 localhost podman[84705]: 2026-02-01 08:25:21.883488744 +0000 UTC m=+0.090330109 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, batch=17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:25:21 localhost podman[84705]: 2026-02-01 08:25:21.936313561 +0000 UTC m=+0.143154966 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=ovn_metadata_agent, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:25:21 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:25:21 localhost podman[84706]: 2026-02-01 08:25:21.939765543 +0000 UTC m=+0.143112404 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 1 03:25:22 localhost podman[84706]: 2026-02-01 08:25:22.022852607 +0000 UTC m=+0.226199438 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z) Feb 1 03:25:22 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:25:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:25:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:25:34 localhost recover_tripleo_nova_virtqemud[84755]: 62016 Feb 1 03:25:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:25:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:25:34 localhost systemd[1]: tmp-crun.447MDf.mount: Deactivated successfully. Feb 1 03:25:34 localhost podman[84753]: 2026-02-01 08:25:34.872357969 +0000 UTC m=+0.082685693 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 1 03:25:35 localhost podman[84753]: 2026-02-01 08:25:35.091055433 +0000 UTC m=+0.301383187 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:25:35 localhost systemd[1]: 
75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:25:44 localhost podman[84830]: 2026-02-01 08:25:44.872791674 +0000 UTC m=+0.089179776 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:25:44 localhost podman[84830]: 2026-02-01 08:25:44.88309444 +0000 UTC m=+0.099482532 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:25:44 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:25:46 localhost systemd[1]: tmp-crun.LzjGFO.mount: Deactivated successfully. 
Feb 1 03:25:46 localhost podman[84850]: 2026-02-01 08:25:46.910236496 +0000 UTC m=+0.120612697 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64) Feb 1 03:25:46 localhost podman[84850]: 2026-02-01 08:25:46.921663035 +0000 UTC m=+0.132039206 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, io.openshift.expose-services=) Feb 1 03:25:46 localhost podman[84851]: 2026-02-01 08:25:46.937858134 +0000 UTC m=+0.145007100 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:25:46 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:25:46 localhost podman[84855]: 2026-02-01 08:25:46.985430945 +0000 UTC m=+0.186676956 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc.) Feb 1 03:25:46 localhost podman[84851]: 2026-02-01 08:25:46.993569686 +0000 UTC m=+0.200718602 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, tcib_managed=true, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc.) Feb 1 03:25:47 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:25:47 localhost podman[84855]: 2026-02-01 08:25:47.015837096 +0000 UTC m=+0.217083087 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com) Feb 1 03:25:47 localhost systemd[1]: 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:25:47 localhost podman[84852]: 2026-02-01 08:25:47.085253835 +0000 UTC m=+0.288209557 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:25:47 localhost podman[84859]: 2026-02-01 08:25:47.089948114 +0000 UTC m=+0.288209416 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5) Feb 1 03:25:47 localhost podman[84852]: 2026-02-01 08:25:47.09453816 +0000 UTC m=+0.297493882 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Feb 1 03:25:47 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:25:47 localhost podman[84859]: 2026-02-01 08:25:47.117339977 +0000 UTC m=+0.315601309 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13) Feb 1 03:25:47 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:25:47 localhost systemd[1]: tmp-crun.xTNCie.mount: Deactivated successfully. Feb 1 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:25:50 localhost systemd[1]: tmp-crun.KpQRtm.mount: Deactivated successfully. Feb 1 03:25:50 localhost podman[84963]: 2026-02-01 08:25:50.865747079 +0000 UTC m=+0.075468119 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:25:51 localhost podman[84963]: 2026-02-01 08:25:51.27286892 +0000 UTC m=+0.482589960 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:25:51 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:25:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:25:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:25:52 localhost podman[84987]: 2026-02-01 08:25:52.863624827 +0000 UTC m=+0.082132017 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:25:52 localhost podman[84987]: 2026-02-01 08:25:52.903326074 +0000 UTC m=+0.121833344 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 1 03:25:52 localhost podman[84988]: 2026-02-01 08:25:52.913437224 +0000 UTC m=+0.129285775 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20260112.1, 
distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:25:52 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:25:52 localhost podman[84988]: 2026-02-01 08:25:52.934989292 +0000 UTC m=+0.150837863 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:25:52 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:26:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:26:05 localhost podman[85033]: 2026-02-01 08:26:05.867658153 +0000 UTC m=+0.083658132 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:26:06 localhost podman[85033]: 2026-02-01 08:26:06.09186373 +0000 UTC m=+0.307863769 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., 
release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 1 03:26:06 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:26:15 localhost podman[85140]: 2026-02-01 08:26:15.879416136 +0000 UTC m=+0.097531653 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, container_name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3) Feb 1 03:26:15 localhost podman[85140]: 2026-02-01 08:26:15.89273287 +0000 UTC m=+0.110848517 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=) Feb 1 03:26:15 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:26:17 localhost systemd[1]: tmp-crun.bJpLwP.mount: Deactivated successfully. 
Feb 1 03:26:17 localhost podman[85161]: 2026-02-01 08:26:17.87681991 +0000 UTC m=+0.084615311 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, tcib_managed=true, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc.) 
Feb 1 03:26:17 localhost podman[85169]: 2026-02-01 08:26:17.927957816 +0000 UTC m=+0.122829503 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, distribution-scope=public, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:26:17 localhost systemd[1]: tmp-crun.WorR1d.mount: Deactivated successfully. 
Feb 1 03:26:17 localhost podman[85161]: 2026-02-01 08:26:17.972846757 +0000 UTC m=+0.180642198 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:26:17 localhost podman[85160]: 2026-02-01 08:26:17.979937136 +0000 UTC m=+0.192589610 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
health_status=healthy, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:26:17 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:26:17 localhost podman[85160]: 2026-02-01 08:26:17.988554523 +0000 UTC m=+0.201207007 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 1 03:26:17 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:26:18 localhost podman[85162]: 2026-02-01 08:26:18.034075512 +0000 UTC m=+0.237913825 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:26:18 localhost podman[85162]: 2026-02-01 08:26:18.071968495 +0000 UTC m=+0.275806848 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:34:43Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible) Feb 1 03:26:18 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:26:18 localhost podman[85164]: 2026-02-01 08:26:18.095406671 +0000 UTC m=+0.296739310 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, version=17.1.13, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510) Feb 1 03:26:18 localhost podman[85169]: 2026-02-01 08:26:18.109375515 +0000 UTC m=+0.304247172 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 1 03:26:18 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:26:18 localhost podman[85164]: 2026-02-01 08:26:18.131833451 +0000 UTC m=+0.333166080 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:26:18 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:26:21 localhost systemd[1]: tmp-crun.ekXTAh.mount: Deactivated successfully. 
Feb 1 03:26:21 localhost podman[85275]: 2026-02-01 08:26:21.873882963 +0000 UTC m=+0.088827305 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:26:22 localhost podman[85275]: 2026-02-01 08:26:22.238860245 +0000 UTC m=+0.453804637 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, 
com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:26:22 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:26:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:26:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:26:23 localhost systemd[1]: tmp-crun.lgAG0z.mount: Deactivated successfully. 
Feb 1 03:26:23 localhost podman[85300]: 2026-02-01 08:26:23.862463305 +0000 UTC m=+0.079221240 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public) Feb 1 03:26:23 localhost podman[85299]: 2026-02-01 08:26:23.875684608 +0000 UTC m=+0.090905576 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510) Feb 1 03:26:23 localhost podman[85300]: 2026-02-01 08:26:23.878652516 +0000 UTC m=+0.095410380 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true) Feb 1 03:26:23 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:26:23 localhost podman[85299]: 2026-02-01 08:26:23.918447135 +0000 UTC m=+0.133668123 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:26:23 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:26:24 localhost systemd[1]: tmp-crun.e0LBfa.mount: Deactivated successfully. Feb 1 03:26:35 localhost sshd[85345]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:26:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:26:36 localhost podman[85369]: 2026-02-01 08:26:36.872075604 +0000 UTC m=+0.085247198 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z) Feb 1 03:26:37 localhost podman[85369]: 2026-02-01 08:26:37.058324407 +0000 UTC m=+0.271495931 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:26:37 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:26:46 localhost podman[85420]: 2026-02-01 08:26:46.87135034 +0000 UTC m=+0.086447725 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public) Feb 1 03:26:46 localhost podman[85420]: 2026-02-01 08:26:46.906431749 +0000 UTC m=+0.121529104 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, architecture=x86_64, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Feb 1 03:26:46 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:26:48 localhost podman[85442]: 2026-02-01 08:26:48.886076367 +0000 UTC m=+0.090208536 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:26:48 localhost podman[85442]: 2026-02-01 08:26:48.925731473 +0000 UTC m=+0.129863632 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, container_name=iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, 
batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 1 03:26:48 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:26:48 localhost podman[85441]: 2026-02-01 08:26:48.941162971 +0000 UTC m=+0.146149776 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Feb 1 03:26:48 localhost podman[85441]: 2026-02-01 08:26:48.974764567 +0000 UTC m=+0.179751382 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 1 03:26:48 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:26:48 localhost podman[85440]: 2026-02-01 08:26:48.989279027 +0000 UTC m=+0.195717154 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:26:49 localhost podman[85459]: 2026-02-01 08:26:48.906784511 +0000 UTC m=+0.096130021 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:26:49 localhost podman[85440]: 2026-02-01 08:26:49.026834 +0000 UTC m=+0.233272077 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, 
url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=) Feb 1 03:26:49 localhost podman[85459]: 2026-02-01 08:26:49.039803255 +0000 UTC m=+0.229148765 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:26:49 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:26:49 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:26:49 localhost podman[85443]: 2026-02-01 08:26:49.095320161 +0000 UTC m=+0.294089180 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:26:49 localhost podman[85443]: 2026-02-01 08:26:49.128813585 +0000 UTC m=+0.327582634 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, release=1766032510, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 1 03:26:49 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:26:49 localhost systemd[1]: tmp-crun.5rwM0s.mount: Deactivated successfully. Feb 1 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:26:52 localhost podman[85555]: 2026-02-01 08:26:52.865100317 +0000 UTC m=+0.083296931 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target) Feb 1 03:26:53 localhost podman[85555]: 2026-02-01 08:26:53.231281595 +0000 UTC m=+0.449478179 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
container_name=nova_migration_target, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5) Feb 1 03:26:53 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:26:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:26:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:26:54 localhost podman[85579]: 2026-02-01 08:26:54.881863625 +0000 UTC m=+0.071807740 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.13, vcs-type=git) Feb 1 03:26:54 localhost podman[85579]: 2026-02-01 08:26:54.903232709 +0000 UTC m=+0.093176894 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:26:54 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:26:54 localhost podman[85578]: 2026-02-01 08:26:54.94715294 +0000 UTC m=+0.136913649 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 1 03:26:55 localhost podman[85578]: 2026-02-01 08:26:55.014594801 +0000 UTC m=+0.204355520 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5) Feb 1 03:26:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:27:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:27:00 localhost recover_tripleo_nova_virtqemud[85626]: 62016 Feb 1 03:27:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:27:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:27:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:27:07 localhost podman[85627]: 2026-02-01 08:27:07.862408125 +0000 UTC m=+0.076484720 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:10:14Z, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:27:08 localhost podman[85627]: 2026-02-01 08:27:08.054921762 +0000 UTC m=+0.268998297 container exec_died 
75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, batch=17.1_20260112.1) Feb 1 03:27:08 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
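The cycle above repeats for each container on this node: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman emits a "container health_status" event whose key=value labels carry the container name and result, the matching "container exec_died" event follows, and the unit deactivates. A minimal sketch for pulling the name and reported status out of those events, assuming one journal record per line (as journalctl emits them) and only the label layout visible here; the regexes and function names are illustrative, not a podman interface:

import re
import sys

# A health_status event looks like:
#   "... container health_status <64-hex id> (image=..., name=collectd, health_status=healthy, ...)"
# The container name and the status are read straight from the key=value labels; the
# leading space/parenthesis guard keeps "name=" from matching inside "container_name=".
EVENT = re.compile(r"container health_status [0-9a-f]{64}")
NAME = re.compile(r"[ (]name=([^,)]+)")
STATUS = re.compile(r"health_status=([^,)]+)")

def health_events(lines):
    """Yield (container_name, status) for every health_status event seen."""
    for line in lines:
        if not EVENT.search(line):
            continue
        name, status = NAME.search(line), STATUS.search(line)
        if name and status:
            yield name.group(1), status.group(1)

if __name__ == "__main__":
    # e.g.  journalctl --no-pager | python3 health_events.py
    for name, status in health_events(sys.stdin):
        if status != "healthy":
            print(f"unhealthy: {name} -> {status}")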
Feb 1 03:27:17 localhost podman[85734]: 2026-02-01 08:27:17.878177456 +0000 UTC m=+0.092285317 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=collectd) Feb 1 03:27:17 localhost podman[85734]: 2026-02-01 08:27:17.914967797 +0000 UTC m=+0.129075648 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, 
cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:27:17 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:27:19 localhost podman[85754]: 2026-02-01 08:27:19.883532867 +0000 UTC m=+0.096436012 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Feb 1 03:27:19 localhost podman[85755]: 2026-02-01 08:27:19.932369144 +0000 UTC m=+0.139767455 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container) Feb 1 03:27:19 localhost podman[85754]: 2026-02-01 08:27:19.94773525 +0000 UTC m=+0.160638445 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:27:19 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:27:19 localhost podman[85755]: 2026-02-01 08:27:19.995803086 +0000 UTC m=+0.203201407 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5) Feb 1 03:27:20 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:27:20 localhost podman[85756]: 2026-02-01 08:27:20.079359963 +0000 UTC m=+0.284660132 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:27:20 localhost podman[85756]: 2026-02-01 08:27:20.09375768 +0000 UTC m=+0.299057839 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 1 03:27:20 localhost podman[85763]: 2026-02-01 08:27:20.05193754 +0000 UTC m=+0.251100087 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, version=17.1.13, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:27:20 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:27:20 localhost podman[85763]: 2026-02-01 08:27:20.135726815 +0000 UTC m=+0.334889382 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Feb 1 03:27:20 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:27:20 localhost podman[85762]: 2026-02-01 08:27:20.188098808 +0000 UTC m=+0.389997386 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:27:20 localhost podman[85762]: 2026-02-01 08:27:20.216782558 +0000 UTC m=+0.418681096 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.13, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 1 03:27:20 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:27:20 localhost systemd[1]: tmp-crun.KqUW9Y.mount: Deactivated successfully. Feb 1 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:27:23 localhost podman[85868]: 2026-02-01 08:27:23.840943886 +0000 UTC m=+0.058361912 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4) Feb 1 03:27:24 localhost podman[85868]: 2026-02-01 08:27:24.210839793 +0000 UTC m=+0.428257779 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_migration_target, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:27:24 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:27:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:27:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:27:25 localhost systemd[1]: tmp-crun.BzSUYn.mount: Deactivated successfully. 
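The two podman[85868] events above (health_status at m=+0.058, exec_died at m=+0.428) suggest the nova_migration_target check ran for roughly 0.37 s: the m=+<seconds> suffix appears to be the offset since that podman process started, so pairing the two offsets per PID gives a rough per-check duration. A sketch of that pairing, keyed only to the timestamp format shown in these lines; the heuristic and names are mine, not a documented podman interface:

import re
from collections import defaultdict

# Each healthcheck run is one short-lived podman process that logs a health_status
# event and then an exec_died event, both stamped "m=+<seconds>".  Pairing the two
# offsets per podman[PID] approximates how long the check took, e.g. ~0.37 s for
# podman[85868] (nova_migration_target) above.
RECORD = re.compile(
    r"podman\[(?P<pid>\d+)\]: \d{4}-\d{2}-\d{2} [\d:.]+ \+0000 UTC "
    r"m=\+(?P<m>[\d.]+) container (?P<event>health_status|exec_died)"
)

def check_durations(lines):
    """Yield (podman PID, seconds between health_status and exec_died)."""
    seen = defaultdict(dict)                 # pid -> {event name: monotonic offset}
    for line in lines:
        for rec in RECORD.finditer(line):
            seen[rec.group("pid")][rec.group("event")] = float(rec.group("m"))
    for pid, events in sorted(seen.items()):
        if "health_status" in events and "exec_died" in events:
            yield pid, events["exec_died"] - events["health_status"]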
Feb 1 03:27:25 localhost podman[85891]: 2026-02-01 08:27:25.883042423 +0000 UTC m=+0.095784311 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:27:25 localhost podman[85892]: 2026-02-01 08:27:25.915421014 +0000 UTC m=+0.125518463 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, 
build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team) Feb 1 03:27:25 localhost podman[85891]: 2026-02-01 08:27:25.923747071 +0000 UTC m=+0.136488999 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64) Feb 1 03:27:25 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:27:25 localhost podman[85892]: 2026-02-01 08:27:25.964738976 +0000 UTC m=+0.174836465 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, batch=17.1_20260112.1) Feb 1 03:27:25 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:27:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:27:38 localhost podman[85984]: 2026-02-01 08:27:38.876271909 +0000 UTC m=+0.089267758 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:27:39 localhost podman[85984]: 2026-02-01 08:27:39.077756973 +0000 UTC m=+0.290752752 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z) Feb 1 03:27:39 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:27:48 localhost podman[86013]: 2026-02-01 08:27:48.862114925 +0000 UTC m=+0.077330134 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Feb 1 03:27:48 localhost podman[86013]: 2026-02-01 08:27:48.870933466 +0000 UTC m=+0.086148615 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:27:48 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:27:50 localhost podman[86042]: 2026-02-01 08:27:50.891656342 +0000 UTC m=+0.091461373 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, vcs-type=git, url=https://www.redhat.com) Feb 1 03:27:50 localhost podman[86042]: 2026-02-01 08:27:50.943640653 +0000 UTC m=+0.143445624 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, 
batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510) Feb 1 03:27:50 localhost podman[86048]: 2026-02-01 08:27:50.952414603 +0000 UTC m=+0.148139252 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, 
architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi) Feb 1 03:27:50 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:27:50 localhost podman[86034]: 2026-02-01 08:27:50.925506466 +0000 UTC m=+0.139130177 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, maintainer=OpenStack 
TripleO Team, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:27:50 localhost podman[86048]: 2026-02-01 08:27:50.987799092 +0000 UTC m=+0.183523671 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, tcib_managed=true) Feb 1 03:27:50 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:27:51 localhost podman[86035]: 2026-02-01 08:27:51.035866287 +0000 UTC m=+0.243950444 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64) Feb 1 03:27:51 localhost podman[86034]: 2026-02-01 08:27:51.059096325 +0000 UTC m=+0.272720086 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, name=rhosp-rhel9/openstack-cron, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:27:51 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:27:51 localhost podman[86036]: 2026-02-01 08:27:51.075316556 +0000 UTC m=+0.281072263 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Feb 1 03:27:51 localhost podman[86036]: 2026-02-01 08:27:51.08858485 +0000 UTC m=+0.294340537 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=) Feb 1 03:27:51 localhost podman[86035]: 2026-02-01 08:27:51.095679201 +0000 UTC m=+0.303763318 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, distribution-scope=public, container_name=nova_compute, release=1766032510, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:27:51 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:27:51 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:27:54 localhost podman[86152]: 2026-02-01 08:27:54.875707049 +0000 UTC m=+0.087756222 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1766032510, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:27:55 localhost podman[86152]: 2026-02-01 08:27:55.285912302 +0000 UTC m=+0.497961475 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, 
vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team) Feb 1 03:27:55 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:27:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:27:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:27:56 localhost podman[86176]: 2026-02-01 08:27:56.871331139 +0000 UTC m=+0.085792724 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:27:56 localhost podman[86175]: 2026-02-01 08:27:56.923549388 +0000 UTC m=+0.141545838 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, architecture=x86_64, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public) Feb 1 03:27:56 localhost podman[86176]: 2026-02-01 08:27:56.94588433 +0000 UTC m=+0.160345905 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1) Feb 1 03:27:56 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:27:56 localhost podman[86175]: 2026-02-01 08:27:56.994644426 +0000 UTC m=+0.212640836 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:27:57 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:28:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:28:09 localhost podman[86222]: 2026-02-01 08:28:09.880465115 +0000 UTC m=+0.092526664 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:28:10 localhost podman[86222]: 2026-02-01 08:28:10.085856555 +0000 UTC m=+0.297918074 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, 
distribution-scope=public, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:28:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:28:19 localhost podman[86329]: 2026-02-01 08:28:19.878969954 +0000 UTC m=+0.092414951 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:28:19 localhost podman[86329]: 2026-02-01 08:28:19.91976492 +0000 UTC m=+0.133209917 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 1 03:28:19 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:28:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:28:20 localhost recover_tripleo_nova_virtqemud[86350]: 62016 Feb 1 03:28:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:28:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:28:21 localhost systemd[1]: tmp-crun.YeNRij.mount: Deactivated successfully. 
Feb 1 03:28:21 localhost podman[86353]: 2026-02-01 08:28:21.907263956 +0000 UTC m=+0.115937146 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-type=git, build-date=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:28:21 localhost podman[86353]: 2026-02-01 08:28:21.940630889 +0000 UTC m=+0.149304139 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) Feb 1 03:28:21 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:28:21 localhost podman[86355]: 2026-02-01 08:28:21.957416174 +0000 UTC m=+0.159452677 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 1 03:28:21 localhost podman[86361]: 2026-02-01 08:28:21.927354944 +0000 UTC m=+0.121416977 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:28:22 localhost podman[86352]: 2026-02-01 08:28:22.007136049 +0000 UTC m=+0.218557575 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.13, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc.) Feb 1 03:28:22 localhost podman[86355]: 2026-02-01 08:28:22.014686475 +0000 UTC m=+0.216722938 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:28:22 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:28:22 localhost podman[86361]: 2026-02-01 08:28:22.061790367 +0000 UTC m=+0.255852380 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 1 03:28:22 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:28:22 localhost podman[86354]: 2026-02-01 08:28:22.106021311 +0000 UTC m=+0.309750426 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 1 03:28:22 localhost podman[86354]: 2026-02-01 08:28:22.11973344 +0000 UTC m=+0.323462575 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:28:22 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:28:22 localhost podman[86352]: 2026-02-01 08:28:22.173846231 +0000 UTC m=+0.385267757 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, distribution-scope=public) Feb 1 03:28:22 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:28:25 localhost podman[86464]: 2026-02-01 08:28:25.87047495 +0000 UTC m=+0.086664211 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public) Feb 1 03:28:26 localhost podman[86464]: 2026-02-01 08:28:26.241767139 +0000 UTC m=+0.457956360 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, 
container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com) Feb 1 03:28:26 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:28:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:28:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:28:27 localhost podman[86488]: 2026-02-01 08:28:27.873880554 +0000 UTC m=+0.086304250 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:28:27 localhost systemd[1]: tmp-crun.MI4lPQ.mount: Deactivated successfully. 
Feb 1 03:28:27 localhost podman[86489]: 2026-02-01 08:28:27.9370902 +0000 UTC m=+0.147634647 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:28:27 localhost podman[86488]: 2026-02-01 08:28:27.952783811 +0000 UTC m=+0.165207527 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, release=1766032510, config_id=tripleo_step4, io.openshift.expose-services=, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 1 03:28:27 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:28:27 localhost podman[86489]: 2026-02-01 08:28:27.965824999 +0000 UTC m=+0.176369406 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:28:27 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:28:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:28:40 localhost podman[86581]: 2026-02-01 08:28:40.879853206 +0000 UTC m=+0.090246493 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, release=1766032510, architecture=x86_64, container_name=metrics_qdr, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 1 03:28:41 localhost podman[86581]: 2026-02-01 08:28:41.065614554 +0000 UTC m=+0.276007791 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:28:41 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:28:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:28:50 localhost podman[86610]: 2026-02-01 08:28:50.873926677 +0000 UTC m=+0.088371744 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, container_name=collectd) Feb 1 03:28:50 localhost podman[86610]: 2026-02-01 08:28:50.887802951 +0000 UTC m=+0.102247998 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, 
architecture=x86_64, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:28:50 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:28:52 localhost podman[86631]: 2026-02-01 08:28:52.849247633 +0000 UTC m=+0.060908004 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_compute) Feb 1 03:28:52 localhost systemd[1]: tmp-crun.lChRxQ.mount: Deactivated successfully. 
Feb 1 03:28:52 localhost podman[86644]: 2026-02-01 08:28:52.893649912 +0000 UTC m=+0.089004574 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:28:52 localhost podman[86630]: 2026-02-01 08:28:52.908092783 +0000 UTC m=+0.121010454 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:28:52 localhost podman[86630]: 2026-02-01 08:28:52.916589409 +0000 UTC m=+0.129507010 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, 
build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:28:52 localhost podman[86631]: 2026-02-01 08:28:52.925670904 +0000 UTC m=+0.137331355 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:28:52 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:28:52 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:28:52 localhost podman[86632]: 2026-02-01 08:28:52.918642874 +0000 UTC m=+0.123755591 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid) Feb 1 03:28:53 localhost podman[86632]: 2026-02-01 08:28:53.005125638 +0000 UTC m=+0.210238355 container exec_died 
28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z) Feb 1 03:28:53 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:28:53 localhost podman[86644]: 2026-02-01 08:28:53.02116962 +0000 UTC m=+0.216524292 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:28:53 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:28:53 localhost podman[86643]: 2026-02-01 08:28:53.1027269 +0000 UTC m=+0.305427711 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:28:53 localhost podman[86643]: 2026-02-01 08:28:53.131433537 +0000 UTC m=+0.334134368 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com) Feb 1 03:28:53 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:28:53 localhost systemd[1]: tmp-crun.25BoLN.mount: Deactivated successfully. Feb 1 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:28:56 localhost podman[86741]: 2026-02-01 08:28:56.869252683 +0000 UTC m=+0.081880591 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:28:57 localhost podman[86741]: 2026-02-01 08:28:57.240968317 +0000 UTC m=+0.453596215 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:28:57 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:28:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:28:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:28:58 localhost podman[86764]: 2026-02-01 08:28:58.872914596 +0000 UTC m=+0.085922398 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git) Feb 1 03:28:58 localhost podman[86764]: 2026-02-01 08:28:58.916249801 +0000 UTC m=+0.129257683 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, container_name=ovn_metadata_agent, release=1766032510, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com) Feb 1 03:28:58 localhost podman[86765]: 2026-02-01 08:28:58.927452411 +0000 UTC m=+0.137326945 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:28:58 localhost podman[86765]: 2026-02-01 08:28:58.955642392 +0000 UTC m=+0.165516926 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, tcib_managed=true, io.openshift.expose-services=) Feb 1 03:28:58 localhost 
systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:28:58 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:29:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:29:11 localhost podman[86812]: 2026-02-01 08:29:11.881881872 +0000 UTC m=+0.095509308 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:29:12 localhost podman[86812]: 2026-02-01 08:29:12.111986887 +0000 UTC m=+0.325614343 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step1) Feb 1 03:29:12 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:29:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:29:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:29:21 localhost recover_tripleo_nova_virtqemud[86919]: 62016 Feb 1 03:29:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:29:21 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:29:21 localhost systemd[1]: tmp-crun.TDrMFC.mount: Deactivated successfully. 
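The metrics_qdr records above follow the pattern repeated throughout this journal: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman logs a container health_status event (health_status=healthy) followed by exec_died, and the unit deactivates. A minimal sketch for tallying those health_status results per container, assuming the excerpt is saved as a plain-text file (the name compute-node.log below is hypothetical) and that the image/name/health_status labels lead each event as they do here:

import re
from collections import Counter

# Matches "container health_status <id> (image=..., name=..., health_status=...,"
EVENT_RE = re.compile(
    r"container health_status [0-9a-f]+ "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def summarise(path):
    """Count health_status results per container name in a syslog excerpt."""
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    for m in EVENT_RE.finditer(text):  # finditer: several events can share one physical line
        counts[(m.group("name"), m.group("status"))] += 1
    return counts

if __name__ == "__main__":
    for (name, status), n in sorted(summarise("compute-node.log").items()):
        print(f"{name}: {status} x{n}")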
Feb 1 03:29:21 localhost podman[86917]: 2026-02-01 08:29:21.882092977 +0000 UTC m=+0.095323052 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:29:21 localhost podman[86917]: 2026-02-01 08:29:21.918369361 +0000 UTC m=+0.131599426 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, container_name=collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, release=1766032510) Feb 1 03:29:21 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:29:23 localhost podman[86940]: 2026-02-01 08:29:23.883681795 +0000 UTC m=+0.091189983 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5) Feb 1 03:29:23 localhost podman[86939]: 2026-02-01 08:29:23.933738229 +0000 UTC m=+0.141297739 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=logrotate_crond, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:29:23 localhost podman[86940]: 2026-02-01 08:29:23.945952582 +0000 UTC m=+0.153460800 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1) Feb 1 03:29:23 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:29:23 localhost systemd[1]: tmp-crun.x6MIEo.mount: Deactivated successfully. 
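Each healthcheck appears twice per cycle: a health_status event when the probe reports, then an exec_died event for the same container ID a fraction of a second later (for nova_compute above, 08:29:23.883 and 08:29:23.945). A rough sketch, under the assumption that the podman timestamp format stays exactly as shown ("YYYY-MM-DD HH:MM:SS.nnnnnnnnn +0000 UTC m=+..."), that pairs the two events to estimate how long each healthcheck ran:

import re
from datetime import datetime

# Captures the UTC timestamp, the event type, and the 64-hex container ID.
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64})"
)

def healthcheck_durations(text):
    """Yield (short container ID, seconds between health_status and exec_died)."""
    started = {}
    for m in EVENT_RE.finditer(text):
        # %f accepts at most 6 fractional digits, so truncate the nanoseconds.
        ts = datetime.strptime(m.group("ts")[:26], "%Y-%m-%d %H:%M:%S.%f")
        cid = m.group("cid")
        if m.group("event") == "health_status":
            started[cid] = ts
        elif cid in started:
            yield cid[:12], (ts - started.pop(cid)).total_seconds()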
Feb 1 03:29:23 localhost podman[86951]: 2026-02-01 08:29:23.997033469 +0000 UTC m=+0.193843512 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:29:24 localhost podman[86941]: 2026-02-01 08:29:24.050594573 +0000 UTC m=+0.253717003 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, architecture=x86_64, description=Red Hat 
OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 1 03:29:24 localhost podman[86951]: 2026-02-01 08:29:24.079203268 +0000 UTC m=+0.276013351 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, release=1766032510, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:29:24 localhost podman[86941]: 2026-02-01 08:29:24.084769462 +0000 UTC m=+0.287891882 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, release=1766032510) Feb 1 03:29:24 localhost systemd[1]: 
28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:29:24 localhost podman[86942]: 2026-02-01 08:29:24.111422686 +0000 UTC m=+0.311507501 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 1 03:29:24 localhost podman[86939]: 2026-02-01 08:29:24.120926643 +0000 UTC m=+0.328486153 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:29:24 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
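The config_data label attached to every event is a Python-style literal (single quotes, True/False), so it can be recovered directly with ast.literal_eval once the balanced braces after "config_data=" are isolated. A sketch, using an abridged copy of the logrotate_crond value above as sample input; the brace matching is deliberately naive and only adequate for label values like these, which contain no braces inside strings:

import ast

# Abridged from the logrotate_crond record above; sample input only.
sample = (
    "config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', "
    "'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', "
    "'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro']}"
)

def parse_config_data(label_text):
    """Return the config_data dict embedded in a podman event's label text."""
    start = label_text.index("config_data=") + len("config_data=")
    depth, end = 0, start
    for i, ch in enumerate(label_text[start:], start):
        depth += ch == "{"
        depth -= ch == "}"
        if depth == 0:
            end = i + 1
            break
    return ast.literal_eval(label_text[start:end])

cfg = parse_config_data(sample)
print(cfg["image"], cfg["healthcheck"]["test"], cfg["net"])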
Feb 1 03:29:24 localhost podman[86942]: 2026-02-01 08:29:24.142411155 +0000 UTC m=+0.342495940 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 1 03:29:24 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:29:24 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:29:27 localhost podman[87056]: 2026-02-01 08:29:27.866629316 +0000 UTC m=+0.081191030 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:29:28 localhost podman[87056]: 2026-02-01 08:29:28.242190609 +0000 UTC m=+0.456752403 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 
17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:29:28 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:29:29 localhost podman[87079]: 2026-02-01 08:29:29.878338489 +0000 UTC m=+0.087090184 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:29:29 localhost podman[87080]: 2026-02-01 08:29:29.935218158 +0000 UTC m=+0.140675149 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:29:29 localhost podman[87079]: 2026-02-01 08:29:29.949907777 +0000 UTC m=+0.158659472 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:29:29 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:29:29 localhost podman[87080]: 2026-02-01 08:29:29.989733022 +0000 UTC m=+0.195189983 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, batch=17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 1 03:29:30 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:29:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:29:42 localhost systemd[1]: tmp-crun.Uq7gAf.mount: Deactivated successfully. Feb 1 03:29:42 localhost podman[87172]: 2026-02-01 08:29:42.886977494 +0000 UTC m=+0.103184928 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true) Feb 1 03:29:43 localhost podman[87172]: 2026-02-01 08:29:43.094686139 +0000 UTC m=+0.310893583 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:29:43 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:29:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:29:52 localhost podman[87201]: 2026-02-01 08:29:52.869437444 +0000 UTC m=+0.079220368 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1766032510, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:29:52 localhost podman[87201]: 2026-02-01 08:29:52.877699783 +0000 UTC m=+0.087482697 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, 
release=1766032510, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:29:52 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:29:54 localhost systemd[1]: tmp-crun.CO8GYH.mount: Deactivated successfully. 
Feb 1 03:29:54 localhost podman[87221]: 2026-02-01 08:29:54.949119473 +0000 UTC m=+0.154235484 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:29:54 localhost podman[87229]: 2026-02-01 08:29:54.909516065 +0000 UTC m=+0.105128519 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:29:54 localhost podman[87229]: 2026-02-01 08:29:54.995036739 +0000 UTC m=+0.190649153 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 1 03:29:55 localhost podman[87221]: 2026-02-01 08:29:55.004204536 +0000 UTC m=+0.209320507 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:29:55 localhost podman[87226]: 2026-02-01 08:29:55.009863803 +0000 UTC m=+0.207676556 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:29:55 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:29:55 localhost podman[87226]: 2026-02-01 08:29:55.035749912 +0000 UTC m=+0.233562735 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64) Feb 1 03:29:55 localhost systemd[1]: 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:29:55 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:29:55 localhost podman[87222]: 2026-02-01 08:29:55.095091907 +0000 UTC m=+0.297363189 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, release=1766032510) Feb 1 03:29:55 localhost podman[87222]: 2026-02-01 08:29:55.1076512 +0000 UTC m=+0.309922492 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z) Feb 1 03:29:55 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:29:55 localhost podman[87220]: 2026-02-01 08:29:55.186453815 +0000 UTC m=+0.396539511 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:29:55 localhost podman[87220]: 2026-02-01 08:29:55.22471845 +0000 UTC m=+0.434804086 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:29:55 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:29:58 localhost systemd[1]: tmp-crun.4i2Pyl.mount: Deactivated successfully. 
Feb 1 03:29:58 localhost podman[87335]: 2026-02-01 08:29:58.871926295 +0000 UTC m=+0.088124696 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:29:59 localhost podman[87335]: 2026-02-01 08:29:59.232612023 +0000 UTC m=+0.448810364 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, version=17.1.13) Feb 1 03:29:59 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:30:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:30:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:30:00 localhost podman[87358]: 2026-02-01 08:30:00.862945732 +0000 UTC m=+0.080765817 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:30:00 localhost systemd[1]: tmp-crun.C2FtgG.mount: Deactivated successfully. 
Feb 1 03:30:00 localhost podman[87359]: 2026-02-01 08:30:00.919560312 +0000 UTC m=+0.133106803 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com) Feb 1 03:30:00 localhost podman[87358]: 2026-02-01 08:30:00.930618837 +0000 UTC m=+0.148438932 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:30:00 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:30:00 localhost podman[87359]: 2026-02-01 08:30:00.972677192 +0000 UTC m=+0.186223703 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-type=git, version=17.1.13, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:30:00 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:30:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:30:13 localhost podman[87404]: 2026-02-01 08:30:13.888628288 +0000 UTC m=+0.103437775 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:30:14 localhost podman[87404]: 2026-02-01 08:30:14.061692069 +0000 UTC m=+0.276501486 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, architecture=x86_64, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 03:30:14 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:30:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:30:23 localhost podman[87509]: 2026-02-01 08:30:23.872569113 +0000 UTC m=+0.082404528 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:30:23 localhost podman[87509]: 2026-02-01 08:30:23.882466422 +0000 UTC m=+0.092301847 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5) Feb 1 03:30:23 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:30:25 localhost systemd[1]: tmp-crun.SB2UB3.mount: Deactivated successfully. 
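The entries above repeat one fixed cycle per container: systemd starts a transient unit named after the container ID that runs /usr/bin/podman healthcheck run <id>, podman records a health_status event (healthy in every probe shown here) followed by an exec_died event for the healthcheck process, and the transient unit then reports "Deactivated successfully". The sketch below is a minimal, hypothetical parser for pulling the container name and health status out of such entries; the regular expression, and the assumption that image, name and health_status are the first three label fields, are taken only from the lines shown here, not from any podman documentation.

    import re
    import sys

    # Assumed layout, based only on the entries above:
    # "... container health_status <64-hex id> (image=..., name=..., health_status=..., ...)"
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
    )

    def parse_health_events(lines):
        # Yield (container_name, short_id, status) for every health_status event found.
        for line in lines:
            m = HEALTH_RE.search(line)
            if m:
                yield m.group("name"), m.group("cid")[:12], m.group("status")

    if __name__ == "__main__":
        # Usage sketch: python3 parse_health.py < messages.log   (file name illustrative)
        for name, cid, status in parse_health_events(sys.stdin):
            print(f"{name} ({cid}): {status}")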
Feb 1 03:30:25 localhost podman[87529]: 2026-02-01 08:30:25.891717399 +0000 UTC m=+0.106126420 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, release=1766032510, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:30:25 localhost podman[87529]: 2026-02-01 08:30:25.895082754 +0000 UTC m=+0.109491775 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z) Feb 1 03:30:25 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:30:25 localhost systemd[1]: tmp-crun.kxuJip.mount: Deactivated successfully. 
Feb 1 03:30:25 localhost podman[87530]: 2026-02-01 08:30:25.938048018 +0000 UTC m=+0.146350297 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 1 03:30:25 localhost podman[87530]: 2026-02-01 08:30:25.963611737 +0000 UTC m=+0.171914026 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=nova_compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:30:25 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:30:26 localhost podman[87531]: 2026-02-01 08:30:26.048706817 +0000 UTC m=+0.253329611 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 1 03:30:26 localhost podman[87537]: 2026-02-01 08:30:26.101936192 +0000 UTC m=+0.303781160 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:30:26 localhost podman[87531]: 2026-02-01 08:30:26.134252733 +0000 UTC m=+0.338875577 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:30:26 localhost podman[87537]: 2026-02-01 08:30:26.152032188 +0000 UTC m=+0.353877136 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, 
build-date=2026-01-12T23:07:47Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:30:26 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:30:26 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:30:26 localhost podman[87538]: 2026-02-01 08:30:26.151799931 +0000 UTC m=+0.351195702 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:30:26 localhost podman[87538]: 2026-02-01 08:30:26.231247415 +0000 UTC m=+0.430643236 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, vcs-type=git, 
com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:30:26 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:30:29 localhost systemd[1]: tmp-crun.Z4sw8O.mount: Deactivated successfully. 
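The probe command itself comes from each container's config_data label: '/openstack/healthcheck' for collectd, metrics_qdr, iscsid and the ceilometer agents, '/openstack/healthcheck 5672' for nova_compute, '/openstack/healthcheck 6642' for ovn_controller, and '/usr/share/openstack-tripleo-common/healthcheck/cron' for logrotate_crond. Because config_data is printed as a Python literal, it can be loaded back with ast.literal_eval once isolated; the sketch below is illustrative only, and its greedy regex assumes that config_data is the only brace-containing value in a single event's label dump, which holds for the entries shown here.

    import ast
    import re

    def extract_config_data(event_labels):
        # event_labels: the parenthesized label dump of one podman event.
        # A greedy match works here because, in these entries, only the
        # config_data value contains '{' and '}'.
        m = re.search(r"config_data=(\{.*\})", event_labels)
        return ast.literal_eval(m.group(1)) if m else None

    # e.g. extract_config_data(labels)["healthcheck"]["test"]
    #      -> '/openstack/healthcheck 6642' for the ovn_controller entry above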
Feb 1 03:30:29 localhost podman[87649]: 2026-02-01 08:30:29.861720345 +0000 UTC m=+0.075748079 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, release=1766032510, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.13) Feb 1 03:30:30 localhost podman[87649]: 2026-02-01 08:30:30.232169929 +0000 UTC m=+0.446197663 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, version=17.1.13, tcib_managed=true, description=Red 
Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:30:30 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:30:31 localhost systemd[1]: tmp-crun.gZjAFG.mount: Deactivated successfully. 
Feb 1 03:30:31 localhost podman[87672]: 2026-02-01 08:30:31.884609959 +0000 UTC m=+0.095506998 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z) Feb 1 03:30:31 localhost podman[87672]: 2026-02-01 08:30:31.903720806 +0000 UTC m=+0.114617815 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.5, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4) Feb 1 03:30:31 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:30:31 localhost podman[87671]: 2026-02-01 08:30:31.970231756 +0000 UTC m=+0.184548602 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO 
Team, container_name=ovn_metadata_agent, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:30:32 localhost podman[87671]: 2026-02-01 08:30:32.009816683 +0000 UTC m=+0.224133549 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:30:32 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:30:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:30:44 localhost podman[87768]: 2026-02-01 08:30:44.870614175 +0000 UTC m=+0.084841143 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 1 03:30:45 localhost podman[87768]: 2026-02-01 08:30:45.065859791 +0000 UTC m=+0.280086809 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Feb 1 03:30:45 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:30:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:30:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 487 writes, 1908 keys, 487 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s#012Interval WAL: 487 writes, 193 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:30:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:30:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 489 writes, 1859 keys, 489 commit groups, 1.0 writes per commit group, ingest: 2.34 MB, 0.00 MB/s#012Interval WAL: 489 writes, 177 syncs, 2.76 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:30:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:30:54 localhost podman[87798]: 2026-02-01 08:30:54.874223807 +0000 UTC m=+0.085324849 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:30:54 localhost podman[87798]: 2026-02-01 08:30:54.881122832 +0000 UTC m=+0.092223904 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, container_name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1) Feb 1 03:30:54 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:30:56 localhost podman[87819]: 2026-02-01 08:30:56.888850091 +0000 UTC m=+0.091832012 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:30:56 localhost podman[87819]: 2026-02-01 08:30:56.941609511 +0000 UTC m=+0.144591422 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 1 03:30:56 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:30:56 localhost systemd[1]: tmp-crun.5V94io.mount: Deactivated successfully. Feb 1 03:30:56 localhost podman[87818]: 2026-02-01 08:30:56.945847494 +0000 UTC m=+0.151015273 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:30:57 localhost podman[87818]: 2026-02-01 08:30:57.028676794 +0000 UTC m=+0.233844583 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:30:57 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:30:57 localhost podman[87820]: 2026-02-01 08:30:56.996518029 +0000 UTC m=+0.196410464 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=iscsid, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:30:57 localhost podman[87821]: 2026-02-01 08:30:57.103368209 +0000 UTC m=+0.301589681 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:30:57 localhost podman[87820]: 2026-02-01 08:30:57.130401434 +0000 UTC m=+0.330293869 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 1 03:30:57 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:30:57 localhost podman[87821]: 2026-02-01 08:30:57.186262741 +0000 UTC m=+0.384484193 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible) Feb 1 03:30:57 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:30:57 localhost podman[87828]: 2026-02-01 08:30:57.204364537 +0000 UTC m=+0.396356135 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public) Feb 1 03:30:57 localhost podman[87828]: 2026-02-01 08:30:57.236712778 +0000 UTC m=+0.428704406 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc.) Feb 1 03:30:57 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:31:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:31:00 localhost recover_tripleo_nova_virtqemud[87939]: 62016 Feb 1 03:31:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:31:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:31:00 localhost systemd[1]: tmp-crun.LVbsWN.mount: Deactivated successfully. 
Feb 1 03:31:00 localhost podman[87935]: 2026-02-01 08:31:00.870720531 +0000 UTC m=+0.083599106 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:01 localhost podman[87935]: 2026-02-01 08:31:01.269980094 +0000 UTC m=+0.482858659 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:31:01 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:31:02 localhost systemd[1]: tmp-crun.9etjsi.mount: Deactivated successfully. 
Feb 1 03:31:02 localhost podman[87962]: 2026-02-01 08:31:02.890861688 +0000 UTC m=+0.102909059 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:31:02 localhost podman[87961]: 2026-02-01 08:31:02.932268602 +0000 UTC m=+0.147093490 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:31:02 localhost podman[87961]: 2026-02-01 08:31:02.971392436 +0000 UTC m=+0.186217344 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:31:02 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:31:03 localhost podman[87962]: 2026-02-01 08:31:03.021719239 +0000 UTC m=+0.233766570 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller) Feb 1 03:31:03 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:31:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:31:15 localhost systemd[1]: tmp-crun.PXHTIl.mount: Deactivated successfully. Feb 1 03:31:15 localhost podman[88009]: 2026-02-01 08:31:15.889269231 +0000 UTC m=+0.101043425 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:16 localhost podman[88009]: 2026-02-01 08:31:16.090646807 +0000 UTC m=+0.302420991 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:16 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:31:25 localhost podman[88115]: 2026-02-01 08:31:25.88675953 +0000 UTC m=+0.100003312 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:31:25 localhost podman[88115]: 2026-02-01 08:31:25.901731564 +0000 UTC m=+0.114975306 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=collectd) Feb 1 03:31:25 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:31:27 localhost systemd[1]: tmp-crun.24LqTe.mount: Deactivated successfully. 
Feb 1 03:31:27 localhost podman[88150]: 2026-02-01 08:31:27.918472178 +0000 UTC m=+0.108806625 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T23:07:30Z, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 1 03:31:27 localhost podman[88136]: 2026-02-01 08:31:27.941682426 +0000 UTC m=+0.149056590 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-cron-container, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:31:27 localhost podman[88136]: 2026-02-01 08:31:27.94958357 +0000 UTC m=+0.156957734 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 1 03:31:27 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:31:27 localhost podman[88138]: 2026-02-01 08:31:27.996923284 +0000 UTC m=+0.198945552 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:31:28 localhost podman[88141]: 
2026-02-01 08:31:27.904524667 +0000 UTC m=+0.103787910 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z) Feb 1 03:31:28 localhost podman[88138]: 2026-02-01 08:31:28.006629293 +0000 UTC m=+0.208651531 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:31:28 localhost podman[88141]: 2026-02-01 08:31:28.035687952 +0000 UTC m=+0.234951215 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5) Feb 1 03:31:28 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:31:28 localhost podman[88150]: 2026-02-01 08:31:28.048149847 +0000 UTC m=+0.238484314 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13) Feb 1 03:31:28 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:31:28 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:31:28 localhost podman[88137]: 2026-02-01 08:31:28.141319658 +0000 UTC m=+0.346859606 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:31:28 localhost podman[88137]: 2026-02-01 08:31:28.202816619 +0000 UTC m=+0.408356587 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:31:28 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:31:31 localhost systemd[1]: tmp-crun.vT3Bqw.mount: Deactivated successfully. Feb 1 03:31:31 localhost podman[88252]: 2026-02-01 08:31:31.862086596 +0000 UTC m=+0.083216433 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, version=17.1.13, container_name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Feb 1 03:31:32 localhost podman[88252]: 2026-02-01 08:31:32.245659896 +0000 UTC m=+0.466789763 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true) Feb 1 03:31:32 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:31:33 localhost podman[88278]: 2026-02-01 08:31:33.88301697 +0000 UTC m=+0.093047997 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:31:33 localhost podman[88277]: 2026-02-01 08:31:33.854582342 +0000 UTC m=+0.070725379 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:31:33 localhost podman[88278]: 2026-02-01 08:31:33.932784589 +0000 UTC m=+0.142815596 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, 
tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:31:33 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:31:33 localhost podman[88277]: 2026-02-01 08:31:33.989154692 +0000 UTC m=+0.205297729 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:34 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:31:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:31:46 localhost systemd[1]: tmp-crun.zNObkr.mount: Deactivated successfully. Feb 1 03:31:46 localhost podman[88369]: 2026-02-01 08:31:46.881809137 +0000 UTC m=+0.095478263 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible) Feb 1 03:31:47 localhost podman[88369]: 2026-02-01 08:31:47.069363726 +0000 UTC m=+0.283032862 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, 
io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:31:47 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:31:56 localhost podman[88400]: 2026-02-01 08:31:56.845424103 +0000 UTC m=+0.068624173 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, container_name=collectd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 1 03:31:56 localhost podman[88400]: 2026-02-01 08:31:56.88382695 +0000 UTC m=+0.107027030 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:31:56 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:31:58 localhost podman[88422]: 2026-02-01 08:31:58.873968581 +0000 UTC m=+0.083972197 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:31:58 localhost podman[88422]: 2026-02-01 08:31:58.882574357 +0000 UTC m=+0.092577963 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:58 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
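Note: each health_status / exec_died event above carries the container's full TripleO/Kolla configuration in its config_data label, rendered in Python literal syntax. A minimal sketch for pulling that payload out of a captured journal entry (extract_config_data is a hypothetical helper, not part of the deployment; it assumes the entry sits on a single line, as in this capture, and that no braces occur inside the quoted strings):

    import ast

    def extract_config_data(line: str) -> dict:
        # Locate the config_data={...} label and take the balanced-brace slice;
        # the value is printed as a Python dict literal in these logs, so it
        # parses cleanly with ast.literal_eval.
        start = line.index("config_data=") + len("config_data=")
        if line[start] != "{":
            raise ValueError("config_data payload not found")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unterminated config_data payload")

Applied to the iscsid entry above, the returned dict would give, for example, cfg['healthcheck']['test'] == '/openstack/healthcheck' and cfg['image'] == 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1'.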
Feb 1 03:31:58 localhost podman[88421]: 2026-02-01 08:31:58.928792215 +0000 UTC m=+0.142212128 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=nova_compute, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:31:58 localhost podman[88429]: 2026-02-01 08:31:58.989514213 +0000 UTC m=+0.194624709 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:31:59 localhost podman[88429]: 2026-02-01 08:31:59.020611635 +0000 UTC m=+0.225722151 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:31:59 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:31:59 localhost podman[88423]: 2026-02-01 08:31:59.038202588 +0000 UTC m=+0.245299755 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13) Feb 1 03:31:59 localhost podman[88421]: 2026-02-01 08:31:59.060087875 +0000 UTC m=+0.273507788 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 1 03:31:59 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:31:59 localhost podman[88423]: 2026-02-01 08:31:59.075626616 +0000 UTC m=+0.282723793 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) 
Feb 1 03:31:59 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:31:59 localhost podman[88420]: 2026-02-01 08:31:59.138245992 +0000 UTC m=+0.355167853 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true) Feb 1 03:31:59 localhost podman[88420]: 2026-02-01 08:31:59.150544941 +0000 UTC m=+0.367466802 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, 
url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 1 03:31:59 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:31:59 localhost systemd[1]: tmp-crun.UqPgBM.mount: Deactivated successfully. Feb 1 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:32:02 localhost podman[88539]: 2026-02-01 08:32:02.855560424 +0000 UTC m=+0.072315877 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container) Feb 1 03:32:03 localhost podman[88539]: 2026-02-01 08:32:03.188654242 +0000 UTC m=+0.405409695 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, 
build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:32:03 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:32:04 localhost podman[88562]: 2026-02-01 08:32:04.860069258 +0000 UTC m=+0.074830684 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:32:04 localhost podman[88562]: 2026-02-01 08:32:04.902687976 +0000 UTC m=+0.117449362 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, config_id=tripleo_step4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:32:04 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
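Note: each cycle above is systemd starting a transient <container-id>.service that wraps /usr/bin/podman healthcheck run, podman logging the health_status and exec_died events, and the unit then deactivating. The same check can be re-run by hand; a small sketch, assuming podman's convention that a passing healthcheck exits with status 0 (container_is_healthy is a hypothetical helper):

    import subprocess

    def container_is_healthy(name_or_id: str) -> bool:
        # Re-runs the same command the transient units above invoke
        # (/usr/bin/podman healthcheck run <container>); a zero exit status
        # is taken to mean the configured test, e.g. /openstack/healthcheck,
        # passed.
        result = subprocess.run(
            ["podman", "healthcheck", "run", name_or_id],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

For example, container_is_healthy('ovn_metadata_agent') re-checks the container whose cycle is logged directly above.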
Feb 1 03:32:04 localhost podman[88563]: 2026-02-01 08:32:04.920537408 +0000 UTC m=+0.131625891 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public) Feb 1 03:32:04 localhost podman[88563]: 2026-02-01 08:32:04.946676216 +0000 UTC m=+0.157764689 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:32:04 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:32:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:32:17 localhost systemd[1]: tmp-crun.nzXF1C.mount: Deactivated successfully. Feb 1 03:32:17 localhost podman[88609]: 2026-02-01 08:32:17.878094271 +0000 UTC m=+0.094045548 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 1 03:32:18 localhost podman[88609]: 2026-02-01 08:32:18.068531819 +0000 UTC m=+0.284483046 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, container_name=metrics_qdr, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5) Feb 1 03:32:18 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:32:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:32:27 localhost podman[88766]: 2026-02-01 08:32:27.868043903 +0000 UTC m=+0.080207222 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true) Feb 1 03:32:27 localhost podman[88766]: 2026-02-01 08:32:27.885243454 +0000 UTC m=+0.097406773 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container) Feb 1 03:32:27 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
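[Editor's note] The entries above repeat one fixed pattern: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman emits a container health_status event carrying the image labels plus the config_data, an exec_died event follows, and the transient unit deactivates. The sketch below is an illustrative Python helper (not part of the host tooling) for pulling the container id, name and health result out of such lines; the field layout (image=..., name=..., health_status=...) is taken from the events above, everything else is an assumption.
# healthcheck_events.py - illustrative sketch, not part of the deployment.
# Assumes journal lines shaped like the podman health_status events above.
import re
import sys

EVENT_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(.*?name=(?P<name>[^,]+),.*?health_status=(?P<state>[^,)]+)"
)

def parse(line: str):
    """Return (container_id, name, state) for a health_status line, else None."""
    m = EVENT_RE.search(line)
    if not m:
        return None
    return m.group("cid"), m.group("name").strip(), m.group("state").strip()

if __name__ == "__main__":
    # Example use (assumed invocation): journalctl -o cat | python3 healthcheck_events.py
    for line in sys.stdin:
        hit = parse(line)
        if hit:
            print("{}  {}  {}".format(*hit))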
Feb 1 03:32:29 localhost podman[88787]: 2026-02-01 08:32:29.930468228 +0000 UTC m=+0.140538096 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 1 03:32:29 localhost podman[88786]: 2026-02-01 08:32:29.888280144 +0000 UTC m=+0.100718585 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, version=17.1.13) Feb 1 03:32:29 localhost podman[88786]: 2026-02-01 08:32:29.974091607 +0000 UTC m=+0.186530048 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, release=1766032510, config_id=tripleo_step4) Feb 1 03:32:29 localhost podman[88787]: 2026-02-01 08:32:29.983531469 +0000 UTC m=+0.193601287 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5) Feb 1 03:32:29 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:32:29 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:32:30 localhost podman[88788]: 2026-02-01 08:32:30.02432611 +0000 UTC m=+0.229528148 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible) Feb 1 03:32:30 localhost podman[88790]: 2026-02-01 08:32:29.979216425 +0000 UTC m=+0.180855342 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:32:30 localhost podman[88790]: 2026-02-01 08:32:30.060171448 +0000 UTC m=+0.261810325 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 1 03:32:30 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:32:30 localhost podman[88800]: 2026-02-01 08:32:30.073454189 +0000 UTC m=+0.271652970 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc.) 
Feb 1 03:32:30 localhost podman[88788]: 2026-02-01 08:32:30.083758057 +0000 UTC m=+0.288960065 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:32:30 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
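[Editor's note] The config_data=... field embedded in each event above is printed as a Python-style dict literal (single-quoted strings, True/False), so the container's healthcheck command and bind mounts can be recovered programmatically. The helper below is a hedged sketch under that assumption; the function name and the brace-matching approach are illustrative, not an existing tool.
# config_data_extract.py - illustrative sketch; field name and layout taken
# from the podman events above, the parsing code itself is an assumption.
import ast

def extract_config_data(event_line: str) -> dict:
    """Pull the config_data={...} dict literal out of one podman event line."""
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The value appears to be a Python dict literal, so literal_eval applies.
                return ast.literal_eval(event_line[start:i + 1])
    raise ValueError("no balanced config_data={...} found")

# Example with the iscsid event above (assumed to be in the variable line):
#   cfg = extract_config_data(line)
#   cfg['healthcheck']['test']  -> '/openstack/healthcheck'
#   cfg['volumes']              -> list of host:container bind mounts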
Feb 1 03:32:30 localhost podman[88800]: 2026-02-01 08:32:30.130711469 +0000 UTC m=+0.328910230 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc.) Feb 1 03:32:30 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:32:33 localhost podman[88903]: 2026-02-01 08:32:33.859400822 +0000 UTC m=+0.074888586 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:32:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 1 03:32:34 localhost podman[88903]: 2026-02-01 08:32:34.249134742 +0000 UTC m=+0.464622456 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:32:34 localhost recover_tripleo_nova_virtqemud[88925]: 62016 Feb 1 03:32:34 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:32:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:32:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
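[Editor's note] Each transient unit above runs "podman healthcheck run <container>", which executes the test configured in config_data (for example /openstack/healthcheck 6642 for ovn_controller) and exits 0 when the container is healthy. The wrapper below is an illustrative sketch of reproducing that check by hand; the container names are taken from the events above and are assumed to still be running, and the exit-code interpretation follows the podman CLI convention.
# run_healthcheck.py - illustrative sketch; "podman healthcheck run" is the
# same command the transient systemd units in this log execute.
import subprocess

def container_is_healthy(name: str) -> bool:
    """Run the container's configured healthcheck; exit code 0 means healthy."""
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for ctr in ("ovn_controller", "nova_compute", "collectd"):
        print(ctr, "healthy" if container_is_healthy(ctr) else "unhealthy")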
Feb 1 03:32:35 localhost podman[88928]: 2026-02-01 08:32:35.867622442 +0000 UTC m=+0.084456472 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, container_name=ovn_metadata_agent) Feb 1 03:32:35 localhost podman[88929]: 2026-02-01 08:32:35.91900754 +0000 UTC m=+0.132195768 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, config_id=tripleo_step4) Feb 1 03:32:35 localhost podman[88928]: 2026-02-01 08:32:35.93773602 +0000 UTC m=+0.154569980 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:32:35 localhost podman[88929]: 2026-02-01 08:32:35.945969084 +0000 UTC m=+0.159157362 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true) Feb 1 03:32:35 localhost systemd[1]: 
e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:32:35 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:32:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:32:48 localhost podman[88998]: 2026-02-01 08:32:48.866744119 +0000 UTC m=+0.081859562 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:32:49 localhost podman[88998]: 2026-02-01 08:32:49.083907944 +0000 UTC m=+0.299023417 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, 
com.redhat.component=openstack-qdrouterd-container, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:32:49 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:32:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
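Each podman entry above has the same shape: a syslog timestamp, the podman timestamp, the event type (container health_status or container exec_died), the 64-character container ID, and the container's full label set in parentheses (image=..., container_name=..., health_status=..., config_data={...}). Below is a minimal Python sketch for pulling the interesting fields out of one such line; the regular expressions are assumptions based only on the entries quoted here, not on any documented podman journal format.

import re

# Matches lines like:
#   "... container health_status 75c8a36d... (image=..., container_name=metrics_qdr, health_status=healthy, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>health_status|exec_died) "
    r"(?P<cid>[0-9a-f]{64}) \((?P<labels>.*)\)"
)

def parse_podman_event(line):
    """Return event type, container ID, container_name and health_status for one
    matching journal line, or None if the line is not a podman container event."""
    m = EVENT_RE.search(line)
    if m is None:
        return None
    labels = m.group("labels")

    def label(key):
        # The labels picked out here (container_name, health_status) never contain
        # commas, so a simple key=value-up-to-the-next-comma scan is sufficient.
        lm = re.search(r"(?:^|, )" + re.escape(key) + r"=([^,)]*)", labels)
        return lm.group(1) if lm else None

    return {
        "event": m.group("event"),
        "container_id": m.group("cid"),
        "container_name": label("container_name"),
        "health_status": label("health_status"),  # absent on exec_died events
    }

Applied to the 03:32:48 metrics_qdr line above, this would give event='health_status', container_name='metrics_qdr', health_status='healthy'; the matching exec_died line a fraction of a second later carries no health_status label.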
Feb 1 03:32:58 localhost podman[89027]: 2026-02-01 08:32:58.860117604 +0000 UTC m=+0.074229176 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 1 03:32:58 localhost podman[89027]: 2026-02-01 08:32:58.867691568 +0000 UTC m=+0.081803110 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:32:58 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:33:00 localhost podman[89051]: 2026-02-01 08:33:00.885591458 +0000 UTC m=+0.089376225 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:33:00 localhost systemd[1]: tmp-crun.FTXYpK.mount: Deactivated successfully. 
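The config_data label in these entries is printed as a Python-literal dict (single-quoted strings, True/False, nested lists), so it can be turned back into a real data structure without writing a parser. A sketch under that assumption follows; the balanced-brace scan only exists to cut the {...} span out of the surrounding comma-separated label blob, and it presumes no string inside config_data contains a stray brace (none of the values shown above do).

import ast

def extract_config_data(labels):
    """Return the config_data label from a podman label blob as a dict, or None."""
    start = labels.find("config_data={")
    if start == -1:
        return None
    i = labels.index("{", start)
    depth = 0
    for j in range(i, len(labels)):
        if labels[j] == "{":
            depth += 1
        elif labels[j] == "}":
            depth -= 1
            if depth == 0:
                # The value is Python literal syntax, so literal_eval can rebuild it.
                return ast.literal_eval(labels[i:j + 1])
    return None  # brace never closed: the label was truncated

For the ceilometer_agent_compute entry just above, the resulting dict gives cfg['healthcheck']['test'] == '/openstack/healthcheck', cfg['net'] == 'host', and cfg['volumes'] as the bind-mount list printed in the label.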
Feb 1 03:33:00 localhost podman[89056]: 2026-02-01 08:33:00.943388624 +0000 UTC m=+0.144031344 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi) Feb 1 03:33:00 localhost podman[89051]: 2026-02-01 08:33:00.947800241 +0000 UTC m=+0.151585008 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1766032510) Feb 1 03:33:00 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
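Each probe shows up twice from the same podman process: a health_status event once the result is known, then an exec_died event for what appears to be the exec session that ran the probe, so the gap between the two timestamps gives a rough wall time for the check. A small worked example using the two podman[89051] timestamps above for ceilometer_agent_compute, truncated to microseconds because datetime's %f does not accept the 9-digit fractions podman prints.

from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# health_status at 08:33:00.885591458, exec_died at 08:33:00.947800241 (UTC)
started  = datetime.strptime("2026-02-01 08:33:00.885591", FMT)
finished = datetime.strptime("2026-02-01 08:33:00.947800", FMT)

print("probe wall time: %.1f ms" % ((finished - started).total_seconds() * 1000))
# prints: probe wall time: 62.2 ms
# The m=+0.089376225 and m=+0.151585008 monotonic offsets give the same delta.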
Feb 1 03:33:00 localhost podman[89046]: 2026-02-01 08:33:00.995660891 +0000 UTC m=+0.205655760 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:33:01 localhost podman[89047]: 2026-02-01 08:33:01.034252374 +0000 UTC m=+0.240375093 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=iscsid, vcs-type=git) Feb 1 03:33:01 localhost podman[89046]: 2026-02-01 08:33:01.048973589 +0000 UTC m=+0.258968508 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:33:01 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:33:01 localhost podman[89047]: 2026-02-01 08:33:01.069669098 +0000 UTC m=+0.275791847 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:33:01 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
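On the systemd side every probe is a short-lived transient unit: "Started /usr/bin/podman healthcheck run <id>." when the timer fires, then "<id>.service: Deactivated successfully." once the run is over (the tmp-crun.*.mount lines are the container runtime's temporary mounts being cleaned up around the same time). Below is a hedged sketch that pairs the two messages by container ID to get a per-check unit lifetime at syslog's one-second resolution; it assumes one journal entry per line, and the patterns reflect only the lines quoted here.

import re
from datetime import datetime

STARTED_RE = re.compile(
    r"^(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
    r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})\."
)
DEACTIVATED_RE = re.compile(
    r"^(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
    r"(?P<cid>[0-9a-f]{64})\.service: Deactivated successfully\."
)
TS_FMT = "%b %d %H:%M:%S"

def healthcheck_unit_lifetimes(lines):
    """Yield (container_id, seconds) for each Started/Deactivated pair seen."""
    started = {}
    for line in lines:
        m = STARTED_RE.match(line)
        if m:
            started[m.group("cid")] = datetime.strptime(m.group("ts"), TS_FMT)
            continue
        m = DEACTIVATED_RE.match(line)
        if m and m.group("cid") in started:
            stop = datetime.strptime(m.group("ts"), TS_FMT)
            yield m.group("cid"), (stop - started.pop(m.group("cid"))).total_seconds()

# Example from the excerpt: the metrics_qdr unit started at 03:32:48 is deactivated
# at 03:32:49, so it would be reported as roughly one second.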
Feb 1 03:33:01 localhost podman[89056]: 2026-02-01 08:33:01.105765644 +0000 UTC m=+0.306408404 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, container_name=ceilometer_agent_ipmi, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 1 03:33:01 localhost podman[89045]: 2026-02-01 08:33:01.140189049 +0000 UTC m=+0.353404028 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, release=1766032510, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team) Feb 1 03:33:01 localhost podman[89045]: 2026-02-01 08:33:01.151599892 +0000 UTC m=+0.364814891 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 1 03:33:01 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:33:01 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:33:04 localhost systemd[1]: tmp-crun.rSaXxi.mount: Deactivated successfully. Feb 1 03:33:04 localhost podman[89160]: 2026-02-01 08:33:04.864495556 +0000 UTC m=+0.084423681 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_migration_target, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5) Feb 1 03:33:05 localhost podman[89160]: 2026-02-01 08:33:05.220865124 +0000 UTC m=+0.440793239 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:33:05 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:33:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:33:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
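Taken together, the labels in this excerpt amount to a small inventory of the TripleO-managed containers on this compute node: which deployment step (config_id) each belongs to and what its healthcheck actually runs. The mapping below is read directly off the config_id and config_data labels above and reproduced as a plain dict rather than computed. The port arguments to /openstack/healthcheck (5672 for nova_compute, 6642 for ovn_controller) presumably name the TCP ports the script looks for, but the log itself does not say so.

from collections import defaultdict

# container_name -> (config_id, healthcheck test), as printed in the labels above
CONTAINER_INVENTORY = {
    "metrics_qdr":              ("tripleo_step1", "/openstack/healthcheck"),
    "collectd":                 ("tripleo_step3", "/openstack/healthcheck"),
    "iscsid":                   ("tripleo_step3", "/openstack/healthcheck"),
    "ceilometer_agent_compute": ("tripleo_step4", "/openstack/healthcheck"),
    "ceilometer_agent_ipmi":    ("tripleo_step4", "/openstack/healthcheck"),
    "logrotate_crond":          ("tripleo_step4", "/usr/share/openstack-tripleo-common/healthcheck/cron"),
    "nova_migration_target":    ("tripleo_step4", "/openstack/healthcheck"),
    "ovn_metadata_agent":       ("tripleo_step4", "/openstack/healthcheck"),
    "ovn_controller":           ("tripleo_step4", "/openstack/healthcheck 6642"),
    "nova_compute":             ("tripleo_step5", "/openstack/healthcheck 5672"),
}

# Quick overview grouped by deployment step.
by_step = defaultdict(list)
for name, (step, _test) in CONTAINER_INVENTORY.items():
    by_step[step].append(name)
for step in sorted(by_step):
    print(step, "->", ", ".join(sorted(by_step[step])))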
Feb 1 03:33:06 localhost podman[89183]: 2026-02-01 08:33:06.858194127 +0000 UTC m=+0.075943918 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:33:06 localhost podman[89184]: 2026-02-01 08:33:06.92071635 +0000 UTC m=+0.133005363 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, distribution-scope=public, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:33:06 localhost podman[89183]: 2026-02-01 08:33:06.940007337 +0000 UTC m=+0.157757108 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, vcs-type=git) Feb 1 03:33:06 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:33:06 localhost podman[89184]: 2026-02-01 08:33:06.992704056 +0000 UTC m=+0.204993029 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, 
release=1766032510, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:33:07 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:33:19 localhost podman[89230]: 2026-02-01 08:33:19.875687352 +0000 UTC m=+0.089533229 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:10:14Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 1 03:33:20 localhost podman[89230]: 2026-02-01 08:33:20.100254266 +0000 UTC m=+0.314100123 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Feb 1 03:33:20 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:33:29 localhost systemd[1]: tmp-crun.rUKqON.mount: Deactivated successfully. 
Feb 1 03:33:29 localhost podman[89335]: 2026-02-01 08:33:29.875839815 +0000 UTC m=+0.089475018 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, container_name=collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 1 03:33:29 localhost podman[89335]: 2026-02-01 08:33:29.889636722 +0000 UTC m=+0.103271905 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:33:29 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:33:31 localhost podman[89357]: 2026-02-01 08:33:31.87837227 +0000 UTC m=+0.089505509 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:33:31 localhost systemd[1]: tmp-crun.wn7vMK.mount: Deactivated successfully. 
Feb 1 03:33:31 localhost podman[89358]: 2026-02-01 08:33:31.928970214 +0000 UTC m=+0.138791873 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:33:31 localhost podman[89357]: 2026-02-01 08:33:31.936728704 +0000 UTC m=+0.147861943 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, release=1766032510, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:33:31 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:33:31 localhost podman[89358]: 2026-02-01 08:33:31.967786474 +0000 UTC m=+0.177608153 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.13) Feb 1 03:33:31 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:33:31 localhost podman[89359]: 2026-02-01 08:33:31.983618994 +0000 UTC m=+0.189462339 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:33:32 localhost podman[89359]: 2026-02-01 08:33:32.034663672 +0000 UTC m=+0.240507027 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:33:32 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:33:32 localhost podman[89356]: 2026-02-01 08:33:32.038459659 +0000 UTC m=+0.246540974 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 1 03:33:32 localhost podman[89360]: 2026-02-01 08:33:32.089030162 +0000 UTC m=+0.292384170 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 1 03:33:32 localhost podman[89356]: 2026-02-01 08:33:32.118922047 +0000 UTC m=+0.327003312 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 1 03:33:32 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:33:32 localhost podman[89360]: 2026-02-01 08:33:32.17172459 +0000 UTC m=+0.375078608 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:33:32 localhost systemd[1]: 
79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:33:35 localhost podman[89477]: 2026-02-01 08:33:35.873151241 +0000 UTC m=+0.085674700 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:33:36 localhost podman[89477]: 2026-02-01 08:33:36.244599866 +0000 UTC m=+0.457123295 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:33:36 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:33:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:33:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:33:37 localhost systemd[1]: tmp-crun.d8ASk8.mount: Deactivated successfully. 
Feb 1 03:33:37 localhost podman[89500]: 2026-02-01 08:33:37.865238292 +0000 UTC m=+0.083049749 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Feb 1 03:33:37 localhost podman[89499]: 2026-02-01 08:33:37.910003036 +0000 UTC m=+0.130175635 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:33:37 localhost podman[89500]: 2026-02-01 08:33:37.913612537 +0000 UTC m=+0.131424024 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, 
com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ovn_controller, config_id=tripleo_step4) Feb 1 03:33:37 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:33:37 localhost podman[89499]: 2026-02-01 08:33:37.957775223 +0000 UTC m=+0.177947872 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:56:19Z, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:33:37 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:33:50 localhost systemd[1]: tmp-crun.W6r7qe.mount: Deactivated successfully. Feb 1 03:33:50 localhost podman[89570]: 2026-02-01 08:33:50.872935503 +0000 UTC m=+0.092754349 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:33:51 localhost podman[89570]: 2026-02-01 08:33:51.086892828 +0000 UTC m=+0.306711664 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_step1, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z) Feb 1 03:33:51 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:34:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:34:00 localhost recover_tripleo_nova_virtqemud[89606]: 62016 Feb 1 03:34:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:34:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:34:00 localhost systemd[1]: tmp-crun.3I1ySj.mount: Deactivated successfully. 
Feb 1 03:34:00 localhost podman[89600]: 2026-02-01 08:34:00.865424023 +0000 UTC m=+0.076250988 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:34:00 localhost podman[89600]: 2026-02-01 08:34:00.901353614 +0000 UTC m=+0.112180609 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 
17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64) Feb 1 03:34:00 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:34:02 localhost podman[89623]: 2026-02-01 08:34:02.886621655 +0000 UTC m=+0.089684604 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, container_name=nova_compute) Feb 1 03:34:02 localhost podman[89623]: 2026-02-01 08:34:02.911618968 +0000 UTC m=+0.114681897 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc.) Feb 1 03:34:02 localhost systemd[1]: tmp-crun.5qEowd.mount: Deactivated successfully. Feb 1 03:34:02 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:34:02 localhost podman[89625]: 2026-02-01 08:34:02.939367976 +0000 UTC m=+0.139186474 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:34:02 localhost podman[89625]: 2026-02-01 08:34:02.995729749 +0000 UTC m=+0.195548277 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 1 03:34:02 localhost podman[89622]: 2026-02-01 08:34:02.995688247 +0000 UTC m=+0.204058970 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:34:03 localhost podman[89622]: 2026-02-01 08:34:03.032853306 +0000 UTC m=+0.241224029 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, container_name=logrotate_crond, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:34:03 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:34:03 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:34:03 localhost podman[89631]: 2026-02-01 08:34:03.059724047 +0000 UTC m=+0.255081607 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, io.openshift.expose-services=) Feb 1 03:34:03 localhost podman[89631]: 2026-02-01 08:34:03.08891474 +0000 UTC m=+0.284272350 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-type=git, release=1766032510, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, version=17.1.13) Feb 1 03:34:03 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:34:03 localhost podman[89624]: 2026-02-01 08:34:03.137856303 +0000 UTC m=+0.337515027 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step3) Feb 1 03:34:03 localhost podman[89624]: 2026-02-01 08:34:03.175792996 +0000 UTC m=+0.375451740 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:34:03 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:34:06 localhost podman[89738]: 2026-02-01 08:34:06.862987986 +0000 UTC m=+0.077839068 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:34:07 localhost podman[89738]: 2026-02-01 08:34:07.24704728 +0000 UTC m=+0.461898372 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, config_id=tripleo_step4, distribution-scope=public) Feb 1 03:34:07 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:34:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:34:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:34:08 localhost podman[89761]: 2026-02-01 08:34:08.8658427 +0000 UTC m=+0.079871430 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:34:08 localhost podman[89762]: 2026-02-01 08:34:08.918696244 +0000 UTC m=+0.129629548 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:34:08 localhost podman[89762]: 2026-02-01 08:34:08.946632268 +0000 UTC m=+0.157565572 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.expose-services=, architecture=x86_64) Feb 1 03:34:08 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:34:08 localhost podman[89761]: 2026-02-01 08:34:08.997206422 +0000 UTC m=+0.211235082 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:34:09 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:34:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:34:21 localhost podman[89808]: 2026-02-01 08:34:21.905331065 +0000 UTC m=+0.119648681 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:34:22 localhost podman[89808]: 2026-02-01 08:34:22.087197148 +0000 UTC m=+0.301514764 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5) Feb 1 03:34:22 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:34:25 localhost podman[89938]: 2026-02-01 08:34:25.351617307 +0000 UTC m=+0.103225683 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, ceph=True, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 03:34:25 localhost podman[89938]: 2026-02-01 08:34:25.451768743 +0000 UTC m=+0.203377109 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git) Feb 1 03:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:34:31 localhost podman[90086]: 2026-02-01 08:34:31.921211086 +0000 UTC m=+0.124052877 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true) Feb 1 03:34:31 localhost podman[90086]: 2026-02-01 08:34:31.956569079 +0000 UTC m=+0.159410850 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.13, config_id=tripleo_step3, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:34:31 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:34:33 localhost systemd[1]: tmp-crun.h3lRVi.mount: Deactivated successfully. 
Feb 1 03:34:33 localhost podman[90109]: 2026-02-01 08:34:33.869334318 +0000 UTC m=+0.081667277 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:34:33 localhost podman[90107]: 2026-02-01 08:34:33.913761421 +0000 UTC m=+0.130535436 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, com.redhat.component=openstack-cron-container, vcs-type=git, name=rhosp-rhel9/openstack-cron, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, 
batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com) Feb 1 03:34:33 localhost podman[90108]: 2026-02-01 08:34:33.92373823 +0000 UTC m=+0.134455848 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:34:33 localhost podman[90107]: 2026-02-01 08:34:33.932655525 +0000 UTC m=+0.149429550 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1766032510, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=) Feb 1 03:34:33 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:34:33 localhost podman[90108]: 2026-02-01 08:34:33.953599203 +0000 UTC m=+0.164316811 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1766032510, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:34:33 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:34:33 localhost podman[90109]: 2026-02-01 08:34:33.985332424 +0000 UTC m=+0.197665433 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.13, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git) Feb 1 03:34:33 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:34:34 localhost podman[90110]: 2026-02-01 08:34:33.885737684 +0000 UTC m=+0.091065766 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1766032510, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:34:34 localhost podman[90110]: 2026-02-01 08:34:34.06603306 +0000 UTC m=+0.271361092 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4) Feb 1 03:34:34 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:34:34 localhost podman[90111]: 2026-02-01 08:34:34.074373687 +0000 UTC m=+0.281146464 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc.) 
Feb 1 03:34:34 localhost podman[90111]: 2026-02-01 08:34:34.095724337 +0000 UTC m=+0.302497124 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 1 03:34:34 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:34:37 localhost podman[90225]: 2026-02-01 08:34:37.854651256 +0000 UTC m=+0.067863159 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:34:38 localhost podman[90225]: 2026-02-01 08:34:38.253829198 +0000 UTC m=+0.467041131 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:34:38 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:34:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:34:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:34:39 localhost podman[90249]: 2026-02-01 08:34:39.872742041 +0000 UTC m=+0.081916304 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:34:39 localhost podman[90249]: 2026-02-01 08:34:39.926904186 +0000 UTC m=+0.136078439 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:34:39 localhost systemd[1]: tmp-crun.tXxdHd.mount: Deactivated successfully. Feb 1 03:34:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:34:39 localhost podman[90250]: 2026-02-01 08:34:39.954075816 +0000 UTC m=+0.156715046 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 1 03:34:39 localhost podman[90250]: 2026-02-01 08:34:39.978546452 +0000 UTC m=+0.181185672 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 1 03:34:39 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:34:52 localhost podman[90320]: 2026-02-01 08:34:52.875677812 +0000 UTC m=+0.087619920 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc.) Feb 1 03:34:53 localhost podman[90320]: 2026-02-01 08:34:53.095733426 +0000 UTC m=+0.307675524 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git) Feb 1 03:34:53 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:35:02 localhost systemd[1]: tmp-crun.Ygy6M1.mount: Deactivated successfully. 
Feb 1 03:35:02 localhost podman[90349]: 2026-02-01 08:35:02.875388203 +0000 UTC m=+0.091375997 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:35:02 localhost podman[90349]: 2026-02-01 08:35:02.918614199 +0000 UTC m=+0.134602053 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, 
build-date=2026-01-12T22:10:15Z, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 1 03:35:02 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:35:04 localhost podman[90369]: 2026-02-01 08:35:04.88805563 +0000 UTC m=+0.099467026 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1766032510, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:35:04 localhost podman[90369]: 2026-02-01 08:35:04.895195061 +0000 UTC m=+0.106606417 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.component=openstack-cron-container, io.openshift.expose-services=) Feb 1 03:35:04 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:35:04 localhost podman[90372]: 2026-02-01 08:35:04.991375144 +0000 UTC m=+0.191192022 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, distribution-scope=public, release=1766032510, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Feb 1 03:35:05 localhost podman[90383]: 2026-02-01 08:35:04.955030521 +0000 UTC m=+0.148863944 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, distribution-scope=public, 
config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 1 03:35:05 localhost podman[90372]: 2026-02-01 08:35:05.027687207 +0000 UTC m=+0.227504065 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:35:05 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:35:05 localhost podman[90371]: 2026-02-01 08:35:05.043599019 +0000 UTC m=+0.244868722 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:35:05 localhost podman[90371]: 2026-02-01 08:35:05.07726463 +0000 UTC m=+0.278534303 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z) Feb 1 03:35:05 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:35:05 localhost podman[90383]: 2026-02-01 08:35:05.092937774 +0000 UTC m=+0.286771187 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public) Feb 1 03:35:05 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:35:05 localhost podman[90370]: 2026-02-01 08:35:05.094909765 +0000 UTC m=+0.302425911 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:35:05 localhost podman[90370]: 2026-02-01 08:35:05.176132937 +0000 UTC m=+0.383649053 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510) Feb 1 03:35:05 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:35:08 localhost systemd[1]: tmp-crun.oBGu6J.mount: Deactivated successfully. 
Feb 1 03:35:08 localhost podman[90488]: 2026-02-01 08:35:08.879352803 +0000 UTC m=+0.095737801 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1) Feb 1 03:35:09 localhost podman[90488]: 2026-02-01 08:35:09.248066683 +0000 UTC m=+0.464451741 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, release=1766032510, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:35:09 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:35:10 localhost podman[90512]: 2026-02-01 08:35:10.882789325 +0000 UTC m=+0.084547925 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Feb 1 03:35:10 localhost podman[90512]: 2026-02-01 08:35:10.916620581 +0000 UTC m=+0.118379171 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, 
maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, config_id=tripleo_step4, container_name=ovn_controller) Feb 1 03:35:10 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:35:10 localhost systemd[1]: tmp-crun.csxpE1.mount: Deactivated successfully. Feb 1 03:35:10 localhost podman[90511]: 2026-02-01 08:35:10.956561715 +0000 UTC m=+0.160867214 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true) Feb 1 03:35:10 localhost podman[90511]: 2026-02-01 08:35:10.993650593 +0000 UTC m=+0.197956092 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:35:11 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:35:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:35:23 localhost recover_tripleo_nova_virtqemud[90565]: 62016 Feb 1 03:35:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:35:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:35:23 localhost systemd[1]: tmp-crun.ZaWbrn.mount: Deactivated successfully. Feb 1 03:35:23 localhost podman[90558]: 2026-02-01 08:35:23.890337642 +0000 UTC m=+0.097710742 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 
17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:35:24 localhost podman[90558]: 2026-02-01 08:35:24.102230424 +0000 UTC m=+0.309603474 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, managed_by=tripleo_ansible) Feb 1 03:35:24 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:35:33 localhost podman[90665]: 2026-02-01 08:35:33.912437664 +0000 UTC m=+0.124442989 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, tcib_managed=true, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:35:33 localhost podman[90665]: 2026-02-01 08:35:33.94594949 +0000 UTC m=+0.157954845 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, vcs-type=git, container_name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:35:33 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:35:35 localhost systemd[1]: tmp-crun.kyow5R.mount: Deactivated successfully. 
Feb 1 03:35:35 localhost podman[90688]: 2026-02-01 08:35:35.852880629 +0000 UTC m=+0.070882834 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, version=17.1.13, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, batch=17.1_20260112.1) Feb 1 03:35:35 localhost podman[90689]: 2026-02-01 08:35:35.870782992 +0000 UTC m=+0.083649758 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510) Feb 1 03:35:35 localhost podman[90689]: 2026-02-01 08:35:35.916587008 +0000 UTC m=+0.129453814 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, 
io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:35:35 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:35:35 localhost podman[90690]: 2026-02-01 08:35:35.928851597 +0000 UTC m=+0.136144650 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1766032510, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:35:35 localhost podman[90688]: 2026-02-01 08:35:35.935330547 +0000 UTC m=+0.153332772 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Feb 1 03:35:35 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:35:35 localhost podman[90690]: 2026-02-01 08:35:35.946267746 +0000 UTC m=+0.153560799 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13) Feb 1 03:35:35 localhost podman[90687]: 2026-02-01 08:35:35.906175546 +0000 UTC m=+0.120919410 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, tcib_managed=true, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1) Feb 1 03:35:35 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:35:36 localhost podman[90686]: 2026-02-01 08:35:36.005400394 +0000 UTC m=+0.223131400 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, version=17.1.13, architecture=x86_64, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:35:36 localhost podman[90686]: 2026-02-01 08:35:36.033564125 +0000 UTC m=+0.251295141 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, 
version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, container_name=logrotate_crond, distribution-scope=public) Feb 1 03:35:36 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:35:36 localhost podman[90687]: 2026-02-01 08:35:36.041777879 +0000 UTC m=+0.256521733 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 1 03:35:36 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:35:36 localhost systemd[1]: tmp-crun.NUXvnC.mount: Deactivated successfully. Feb 1 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:35:39 localhost podman[90802]: 2026-02-01 08:35:39.859586009 +0000 UTC m=+0.079000993 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:35:40 localhost podman[90802]: 2026-02-01 08:35:40.204616756 +0000 UTC m=+0.424031700 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 1 03:35:40 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:35:41 localhost podman[90824]: 2026-02-01 08:35:41.870234555 +0000 UTC m=+0.085682101 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Feb 1 03:35:41 localhost podman[90825]: 2026-02-01 08:35:41.922357226 +0000 UTC m=+0.135612194 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ovn_controller, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:35:41 localhost podman[90825]: 2026-02-01 08:35:41.945677056 +0000 UTC m=+0.158932064 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:35:41 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:35:42 localhost podman[90824]: 2026-02-01 08:35:41.999092919 +0000 UTC m=+0.214540455 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:35:42 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:35:54 localhost podman[90872]: 2026-02-01 08:35:54.876520841 +0000 UTC m=+0.091079717 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:35:55 localhost podman[90872]: 
2026-02-01 08:35:55.099279718 +0000 UTC m=+0.313838574 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible) Feb 1 03:35:55 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:36:04 localhost podman[90901]: 2026-02-01 08:36:04.862759966 +0000 UTC m=+0.080652954 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 1 03:36:04 localhost podman[90901]: 2026-02-01 08:36:04.872399144 +0000 UTC m=+0.090292112 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1) Feb 1 03:36:04 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:36:06 localhost systemd[1]: tmp-crun.jihlsg.mount: Deactivated successfully. 
Feb 1 03:36:06 localhost podman[90926]: 2026-02-01 08:36:06.864391153 +0000 UTC m=+0.076344452 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510) Feb 1 03:36:06 localhost podman[90926]: 2026-02-01 08:36:06.872340678 +0000 UTC m=+0.084293977 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 1 03:36:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:36:06 localhost podman[90922]: 2026-02-01 08:36:06.881371218 +0000 UTC m=+0.091975545 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, release=1766032510, io.openshift.expose-services=) Feb 1 03:36:06 localhost podman[90930]: 2026-02-01 08:36:06.933060676 +0000 UTC m=+0.135350386 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team) Feb 1 03:36:06 localhost podman[90929]: 2026-02-01 08:36:06.989426829 +0000 UTC m=+0.195663981 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1) Feb 1 03:36:07 localhost podman[90922]: 2026-02-01 08:36:07.006875458 +0000 UTC m=+0.217479835 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:36:07 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:36:07 localhost podman[90929]: 2026-02-01 08:36:07.042870251 +0000 UTC m=+0.249107403 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510) Feb 1 03:36:07 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:36:07 localhost podman[90921]: 2026-02-01 08:36:06.954398236 +0000 UTC m=+0.173901238 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red 
Hat OpenStack Platform 17.1 cron) Feb 1 03:36:07 localhost podman[90930]: 2026-02-01 08:36:07.062860339 +0000 UTC m=+0.265150009 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:36:07 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:36:07 localhost podman[90921]: 2026-02-01 08:36:07.090179824 +0000 UTC m=+0.309682786 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, release=1766032510) Feb 1 03:36:07 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:36:07 localhost systemd[1]: tmp-crun.lSVAfm.mount: Deactivated successfully. Feb 1 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:36:10 localhost podman[91032]: 2026-02-01 08:36:10.865069617 +0000 UTC m=+0.072803492 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:36:11 localhost podman[91032]: 2026-02-01 08:36:11.241132724 +0000 UTC m=+0.448866539 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, 
architecture=x86_64, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:36:11 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:36:12 localhost podman[91055]: 2026-02-01 08:36:12.873109 +0000 UTC m=+0.085250377 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:36:12 localhost podman[91055]: 2026-02-01 08:36:12.91967221 +0000 UTC m=+0.131813587 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:36:12 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:36:12 localhost podman[91056]: 2026-02-01 08:36:12.940790263 +0000 UTC m=+0.149403971 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:36:12 localhost podman[91056]: 2026-02-01 08:36:12.992181682 +0000 UTC m=+0.200795330 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 
17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:36:13 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:36:25 localhost podman[91104]: 2026-02-01 08:36:25.865494318 +0000 UTC m=+0.080930224 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:10:14Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:36:26 localhost podman[91104]: 2026-02-01 08:36:26.061591091 +0000 UTC m=+0.277026937 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:36:26 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:36:35 localhost podman[91209]: 2026-02-01 08:36:35.859540003 +0000 UTC m=+0.075923998 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com) Feb 1 03:36:35 localhost podman[91209]: 2026-02-01 08:36:35.899801857 +0000 UTC m=+0.116185862 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step3) Feb 1 03:36:35 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:36:37 localhost podman[91238]: 2026-02-01 08:36:37.87480216 +0000 UTC m=+0.075715411 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:36:37 localhost systemd[1]: tmp-crun.D51DK6.mount: Deactivated successfully. 
Feb 1 03:36:37 localhost podman[91238]: 2026-02-01 08:36:37.929622436 +0000 UTC m=+0.130535677 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:36:37 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:36:37 localhost podman[91230]: 2026-02-01 08:36:37.974274876 +0000 UTC m=+0.183616148 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:36:37 localhost podman[91229]: 2026-02-01 08:36:37.931091361 +0000 UTC m=+0.143336203 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:36:38 localhost podman[91232]: 2026-02-01 08:36:38.046449977 +0000 UTC m=+0.251242808 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:36:38 localhost podman[91230]: 2026-02-01 08:36:38.055651802 +0000 UTC m=+0.264993154 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:36:38 localhost podman[91229]: 2026-02-01 08:36:38.065535867 +0000 UTC m=+0.277780699 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4) Feb 1 03:36:38 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:36:38 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:36:38 localhost podman[91232]: 2026-02-01 08:36:38.078645193 +0000 UTC m=+0.283438024 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:36:38 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:36:38 localhost podman[91231]: 2026-02-01 08:36:38.136537323 +0000 UTC m=+0.343766600 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com) Feb 1 03:36:38 localhost podman[91231]: 2026-02-01 08:36:38.169199472 +0000 UTC m=+0.376428769 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Feb 1 03:36:38 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:36:41 localhost podman[91350]: 2026-02-01 08:36:41.860596992 +0000 UTC m=+0.080444228 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 1 03:36:42 localhost podman[91350]: 2026-02-01 08:36:42.185855009 +0000 UTC m=+0.405702265 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, 
architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:36:42 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:36:43 localhost podman[91374]: 2026-02-01 08:36:43.882495635 +0000 UTC m=+0.082797091 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:36:43 localhost systemd[1]: tmp-crun.QqzJfx.mount: Deactivated successfully. 
Feb 1 03:36:43 localhost podman[91373]: 2026-02-01 08:36:43.942380926 +0000 UTC m=+0.145484089 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 1 03:36:43 localhost podman[91374]: 2026-02-01 08:36:43.958466924 +0000 UTC m=+0.158768350 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:36:43 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. 
Feb 1 03:36:43 localhost podman[91373]: 2026-02-01 08:36:43.98872399 +0000 UTC m=+0.191827483 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, build-date=2026-01-12T22:56:19Z, vcs-type=git, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:36:44 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:36:56 localhost podman[91421]: 2026-02-01 08:36:56.863848603 +0000 UTC m=+0.070677207 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1766032510, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:36:57 localhost podman[91421]: 2026-02-01 08:36:57.065765286 +0000 UTC m=+0.272593890 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1) Feb 1 03:36:57 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:37:06 localhost podman[91450]: 2026-02-01 08:37:06.893197919 +0000 UTC m=+0.106423191 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, release=1766032510, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.expose-services=) Feb 1 03:37:06 localhost podman[91450]: 2026-02-01 08:37:06.9022704 +0000 UTC m=+0.115495702 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:37:06 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:37:08 localhost podman[91471]: 2026-02-01 08:37:08.873494897 +0000 UTC m=+0.083325808 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:37:08 localhost podman[91471]: 2026-02-01 08:37:08.931647854 +0000 UTC m=+0.141478695 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1) Feb 1 03:37:08 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:37:08 localhost podman[91470]: 2026-02-01 08:37:08.856581964 +0000 UTC m=+0.072601226 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1) Feb 1 03:37:08 localhost podman[91475]: 2026-02-01 08:37:08.91952621 +0000 UTC m=+0.130627141 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step3, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Feb 1 03:37:08 localhost podman[91477]: 2026-02-01 08:37:08.97449912 +0000 UTC m=+0.176817568 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, distribution-scope=public, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:37:08 localhost podman[91470]: 2026-02-01 08:37:08.989599856 +0000 UTC m=+0.205619128 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) 
Feb 1 03:37:08 localhost podman[91477]: 2026-02-01 08:37:08.997239673 +0000 UTC m=+0.199558151 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Feb 1 03:37:08 localhost podman[91483]: 2026-02-01 08:37:08.947270687 +0000 UTC m=+0.147567103 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1766032510) Feb 1 03:37:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:37:09 localhost podman[91475]: 2026-02-01 08:37:09.000675159 +0000 UTC m=+0.211776080 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:37:09 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:37:09 localhost podman[91483]: 2026-02-01 08:37:09.030691097 +0000 UTC m=+0.230987473 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T23:07:30Z, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:37:09 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:37:09 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:37:09 localhost systemd[1]: tmp-crun.2qulod.mount: Deactivated successfully. Feb 1 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:37:12 localhost podman[91583]: 2026-02-01 08:37:12.867512902 +0000 UTC m=+0.082901963 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container) Feb 1 03:37:13 localhost podman[91583]: 2026-02-01 08:37:13.262854616 +0000 UTC m=+0.478243687 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target) Feb 1 03:37:13 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:37:14 localhost systemd[1]: tmp-crun.at1JrH.mount: Deactivated successfully. 
Feb 1 03:37:14 localhost podman[91605]: 2026-02-01 08:37:14.879560222 +0000 UTC m=+0.092624265 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:37:14 localhost podman[91606]: 2026-02-01 08:37:14.931377073 +0000 UTC m=+0.141094443 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
tcib_managed=true, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:37:14 localhost podman[91605]: 2026-02-01 08:37:14.944219611 +0000 UTC m=+0.157283684 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:37:14 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:37:14 localhost podman[91606]: 2026-02-01 08:37:14.960860505 +0000 UTC m=+0.170577915 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-type=git, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5) Feb 1 03:37:14 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:37:15 localhost systemd[1]: tmp-crun.3AnrNz.mount: Deactivated successfully. Feb 1 03:37:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:37:20 localhost recover_tripleo_nova_virtqemud[91655]: 62016 Feb 1 03:37:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:37:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:37:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:37:27 localhost podman[91656]: 2026-02-01 08:37:27.866858253 +0000 UTC m=+0.080525751 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible) Feb 1 03:37:28 localhost podman[91656]: 2026-02-01 08:37:28.069639023 
+0000 UTC m=+0.283306451 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, tcib_managed=true, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1766032510) Feb 1 03:37:28 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:37:37 localhost systemd[1]: tmp-crun.mLaVJv.mount: Deactivated successfully. 
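The entries above show one complete podman healthcheck cycle for the metrics_qdr container: systemd starts a transient unit named after the container ID that runs /usr/bin/podman healthcheck run <id>, podman logs a health_status event (here health_status=healthy) and then an exec_died event for the finished check process, and the transient unit reports "Deactivated successfully". A minimal Python sketch for tallying those health_status events per container from a saved journal; the helper names and regexes are mine, and they assume one event per line (as in the raw journal; the wrapped lines above fuse several entries) and the label layout seen in this log:

import re
import sys
from collections import Counter, defaultdict

# Labels as they appear in this log: ", container_name=<name>, ..." and
# ", health_status=<status>, ..." inside the parenthesised label list.
NAME = re.compile(r", container_name=([^,)]+)")
STATUS = re.compile(r", health_status=([^,)]+)")

def tally_health(lines):
    counts = defaultdict(Counter)
    for line in lines:
        # Only health_status events carry a health_status label;
        # exec_died events for the same container do not.
        if " container health_status " not in line:
            continue
        name, status = NAME.search(line), STATUS.search(line)
        if name and status:
            counts[name.group(1)][status.group(1)] += 1
    return counts

if __name__ == "__main__":
    for name, statuses in sorted(tally_health(sys.stdin).items()):
        print(name, dict(statuses))

Fed just the entries shown in this window, it would report two healthy checks each for metrics_qdr and collectd and one for each of the other containers.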
Feb 1 03:37:37 localhost podman[91762]: 2026-02-01 08:37:37.877247425 +0000 UTC m=+0.088860859 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, distribution-scope=public) Feb 1 03:37:37 localhost podman[91762]: 2026-02-01 08:37:37.913658111 +0000 UTC m=+0.125271515 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:37:37 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:37:39 localhost systemd[1]: tmp-crun.MOz4Y2.mount: Deactivated successfully. 
Feb 1 03:37:39 localhost podman[91783]: 2026-02-01 08:37:39.926015548 +0000 UTC m=+0.138830123 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:37:39 localhost podman[91783]: 2026-02-01 08:37:39.952770486 +0000 UTC m=+0.165585061 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z) Feb 1 03:37:39 localhost podman[91784]: 2026-02-01 08:37:39.90827117 +0000 UTC m=+0.118037891 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible) Feb 1 03:37:39 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
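Each of these podman events also carries the container's full TripleO definition in its config_data label, written as a Python-literal dict: image, net/ipc/pid settings, privileges, the bind mounts, and the healthcheck test (for example '/openstack/healthcheck 5672' for nova_compute above). A sketch for pulling that label back out of a saved event line and reading the configured check; the function name is mine, and it assumes the label is well-formed literal text exactly as printed in these entries:

import ast

def extract_config_data(line):
    # Locate the first config_data= label and walk the braces so nested
    # dicts (e.g. the 'environment' block) do not cut the match short.
    start = line.find("config_data=")
    if start == -1:
        return None
    i = line.index("{", start)
    depth = 0
    for j, ch in enumerate(line[i:], start=i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The label uses only Python literals (dicts, lists,
                # strings, booleans, ints), so literal_eval can parse it.
                return ast.literal_eval(line[i:j + 1])
    return None

# Example, given one of the nova_compute lines above as `line`:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]   -> '/openstack/healthcheck 5672'
#   len(cfg["volumes"])          -> number of bind mounts for the container

ast.literal_eval is sufficient here precisely because podman prints the label as plain Python literals rather than JSON.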
Feb 1 03:37:39 localhost podman[91791]: 2026-02-01 08:37:39.965991605 +0000 UTC m=+0.169376358 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:37:40 localhost podman[91782]: 2026-02-01 08:37:40.01436572 +0000 UTC m=+0.229654461 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.buildah.version=1.41.5, version=17.1.13) Feb 1 03:37:40 localhost podman[91791]: 2026-02-01 08:37:40.065735628 +0000 UTC m=+0.269120401 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:37:40 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:37:40 localhost podman[91785]: 2026-02-01 08:37:40.082129135 +0000 UTC m=+0.289672327 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, config_id=tripleo_step4, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510) Feb 1 03:37:40 localhost podman[91784]: 2026-02-01 08:37:40.091423622 +0000 UTC m=+0.301190403 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step3, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., vcs-type=git) Feb 1 03:37:40 localhost podman[91782]: 2026-02-01 08:37:40.098799991 +0000 UTC m=+0.314088712 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:37:40 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:37:40 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:37:40 localhost podman[91785]: 2026-02-01 08:37:40.133818233 +0000 UTC m=+0.341361405 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z) Feb 1 03:37:40 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
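The checks themselves are the commands recorded under 'healthcheck' in config_data: /openstack/healthcheck for most services, with a port argument for nova_compute (5672) and, later in this log, ovn_controller (6642), and /usr/share/openstack-tripleo-common/healthcheck/cron for logrotate_crond. They can be re-run on demand with the same podman healthcheck run command the transient units above invoke. The sketch below is only an illustration: the container names are taken from this log, it needs to run as root on the compute node, and it assumes podman's usual convention that exit status 0 means the check passed:

import subprocess

# Containers whose healthcheck events appear in the entries above.
CONTAINERS = [
    "metrics_qdr", "collectd", "iscsid", "nova_compute",
    "nova_migration_target", "ceilometer_agent_compute",
    "ceilometer_agent_ipmi", "logrotate_crond",
]

for name in CONTAINERS:
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    state = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(f"{name}: {state}")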
Feb 1 03:37:43 localhost podman[91900]: 2026-02-01 08:37:43.866250374 +0000 UTC m=+0.084923058 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, release=1766032510, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:37:44 localhost podman[91900]: 2026-02-01 08:37:44.234076046 +0000 UTC m=+0.452748700 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:37:44 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:37:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:37:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:37:45 localhost podman[91925]: 2026-02-01 08:37:45.87403445 +0000 UTC m=+0.087954421 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible) Feb 1 03:37:45 localhost podman[91925]: 2026-02-01 08:37:45.896234776 +0000 UTC m=+0.110154807 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:37:45 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:37:45 localhost podman[91924]: 2026-02-01 08:37:45.97783665 +0000 UTC m=+0.195023641 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 1 03:37:46 localhost podman[91924]: 2026-02-01 08:37:46.045888804 +0000 UTC m=+0.263075815 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:37:46 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:37:58 localhost systemd[1]: tmp-crun.PPkcBs.mount: Deactivated successfully. Feb 1 03:37:58 localhost podman[91971]: 2026-02-01 08:37:58.864080639 +0000 UTC m=+0.081808921 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:37:59 localhost podman[91971]: 2026-02-01 08:37:59.086746393 +0000 UTC m=+0.304474635 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, 
Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 1 03:37:59 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:38:08 localhost podman[91999]: 2026-02-01 08:38:08.873154569 +0000 UTC m=+0.086162175 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, tcib_managed=true, build-date=2026-01-12T22:10:15Z, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:38:08 localhost podman[91999]: 2026-02-01 08:38:08.906717776 +0000 UTC m=+0.119725352 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, container_name=collectd, config_id=tripleo_step3) Feb 1 03:38:08 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:38:10 localhost systemd[1]: tmp-crun.EDmgxT.mount: Deactivated successfully. 
Feb 1 03:38:10 localhost podman[92019]: 2026-02-01 08:38:10.875173507 +0000 UTC m=+0.089505349 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:38:10 localhost podman[92029]: 2026-02-01 08:38:10.884534786 +0000 UTC m=+0.077926520 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:38:10 localhost podman[92022]: 2026-02-01 08:38:10.936709409 +0000 UTC m=+0.139527225 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, version=17.1.13, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z) Feb 1 03:38:10 localhost podman[92019]: 2026-02-01 08:38:10.965128408 +0000 UTC m=+0.179460220 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:38:10 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated 
successfully. Feb 1 03:38:10 localhost podman[92022]: 2026-02-01 08:38:10.97490562 +0000 UTC m=+0.177723506 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, release=1766032510, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:38:10 localhost podman[92027]: 2026-02-01 08:38:10.984371323 +0000 UTC m=+0.183677221 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:38:11 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:38:11 localhost podman[92029]: 2026-02-01 08:38:11.009885311 +0000 UTC m=+0.203277115 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git) Feb 1 03:38:11 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:38:11 localhost podman[92027]: 2026-02-01 08:38:11.041639323 +0000 UTC m=+0.240945241 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T23:07:47Z, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Feb 1 03:38:11 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:38:11 localhost podman[92020]: 2026-02-01 08:38:11.099697828 +0000 UTC m=+0.304143754 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.expose-services=) Feb 1 03:38:11 localhost podman[92020]: 2026-02-01 08:38:11.153008796 +0000 UTC m=+0.357454632 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Feb 1 03:38:11 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:38:11 localhost systemd[1]: tmp-crun.HCOEW6.mount: Deactivated successfully. Feb 1 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:38:14 localhost podman[92134]: 2026-02-01 08:38:14.865177778 +0000 UTC m=+0.078181048 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container) Feb 1 03:38:15 localhost podman[92134]: 2026-02-01 08:38:15.249974386 +0000 UTC m=+0.462977456 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, url=https://www.redhat.com, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:38:15 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:38:16 localhost podman[92159]: 2026-02-01 08:38:16.876733511 +0000 UTC m=+0.086288878 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:38:16 localhost podman[92159]: 2026-02-01 08:38:16.926546122 +0000 UTC m=+0.136101469 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:38:16 localhost podman[92158]: 2026-02-01 08:38:16.938390108 +0000 UTC m=+0.149831193 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, tcib_managed=true) Feb 1 03:38:16 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Deactivated successfully. Feb 1 03:38:17 localhost podman[92158]: 2026-02-01 08:38:17.000771227 +0000 UTC m=+0.212212272 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Feb 1 03:38:17 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:38:29 localhost systemd[1]: tmp-crun.aGNbFq.mount: Deactivated successfully. Feb 1 03:38:29 localhost podman[92206]: 2026-02-01 08:38:29.872157825 +0000 UTC m=+0.085735172 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:38:30 localhost podman[92206]: 2026-02-01 08:38:30.093761416 +0000 UTC m=+0.307338763 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:38:30 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:38:39 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:38:39 localhost recover_tripleo_nova_virtqemud[92318]: 62016 Feb 1 03:38:39 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:38:39 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:38:39 localhost systemd[1]: tmp-crun.Sx6kj3.mount: Deactivated successfully. 
Feb 1 03:38:39 localhost podman[92313]: 2026-02-01 08:38:39.871672271 +0000 UTC m=+0.086698612 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, release=1766032510, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:38:39 localhost podman[92313]: 2026-02-01 08:38:39.907671364 +0000 UTC m=+0.122697705 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 1 03:38:39 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:38:41 localhost podman[92337]: 2026-02-01 08:38:41.869550791 +0000 UTC m=+0.081659816 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, release=1766032510, container_name=nova_compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:38:41 localhost podman[92338]: 2026-02-01 08:38:41.924660995 +0000 UTC m=+0.131746114 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, build-date=2026-01-12T22:34:43Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) 
Feb 1 03:38:41 localhost podman[92338]: 2026-02-01 08:38:41.939608047 +0000 UTC m=+0.146693206 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:38:41 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:38:41 localhost podman[92337]: 2026-02-01 08:38:41.977935802 +0000 UTC m=+0.190044837 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:38:41 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:38:41 localhost podman[92336]: 2026-02-01 08:38:41.990562262 +0000 UTC m=+0.202671107 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, release=1766032510) Feb 1 03:38:42 localhost podman[92339]: 2026-02-01 08:38:42.032510159 +0000 UTC m=+0.238796984 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, distribution-scope=public, release=1766032510, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc.) 
Feb 1 03:38:42 localhost podman[92347]: 2026-02-01 08:38:42.088076687 +0000 UTC m=+0.289835032 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:38:42 localhost podman[92336]: 2026-02-01 08:38:42.111805141 +0000 UTC m=+0.323914066 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public) Feb 1 03:38:42 localhost podman[92347]: 2026-02-01 08:38:42.119627963 +0000 UTC m=+0.321386318 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, release=1766032510, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13) Feb 1 03:38:42 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:38:42 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:38:42 localhost podman[92339]: 2026-02-01 08:38:42.168220456 +0000 UTC m=+0.374507291 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, container_name=ceilometer_agent_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:38:42 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:38:45 localhost podman[92451]: 2026-02-01 08:38:45.872814745 +0000 UTC m=+0.089215350 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:32:04Z) Feb 1 03:38:46 localhost podman[92451]: 2026-02-01 08:38:46.248509891 +0000 UTC m=+0.464910486 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, 
container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=) Feb 1 03:38:46 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:38:47 localhost podman[92474]: 2026-02-01 08:38:47.880417375 +0000 UTC m=+0.090725607 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510) Feb 1 03:38:47 localhost podman[92475]: 2026-02-01 08:38:47.934898809 +0000 UTC m=+0.143183688 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, version=17.1.13, tcib_managed=true) Feb 1 03:38:47 localhost podman[92475]: 2026-02-01 08:38:47.958502938 +0000 UTC m=+0.166787877 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com) Feb 1 03:38:47 localhost podman[92475]: unhealthy Feb 1 03:38:47 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:38:47 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:38:47 localhost podman[92474]: 2026-02-01 08:38:47.986959519 +0000 UTC m=+0.197267761 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:38:48 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. Feb 1 03:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:39:00 localhost systemd[1]: tmp-crun.X7uIUq.mount: Deactivated successfully. Feb 1 03:39:00 localhost podman[92525]: 2026-02-01 08:39:00.882521542 +0000 UTC m=+0.100335773 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1766032510, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:39:01 localhost podman[92525]: 2026-02-01 08:39:01.067095009 +0000 UTC m=+0.284909210 container exec_died 
75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:39:01 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:39:10 localhost podman[92554]: 2026-02-01 08:39:10.858430847 +0000 UTC m=+0.077017163 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 1 03:39:10 localhost podman[92554]: 2026-02-01 08:39:10.896585457 +0000 UTC m=+0.115171753 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:10:15Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:39:10 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:39:12 localhost systemd[1]: tmp-crun.2QQV8I.mount: Deactivated successfully. Feb 1 03:39:12 localhost systemd[1]: tmp-crun.FCpVQq.mount: Deactivated successfully. 
Feb 1 03:39:12 localhost podman[92574]: 2026-02-01 08:39:12.926138725 +0000 UTC m=+0.139796303 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Feb 1 03:39:12 localhost podman[92579]: 2026-02-01 08:39:12.901604097 +0000 UTC m=+0.103351467 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 1 03:39:12 localhost podman[92574]: 2026-02-01 08:39:12.958212768 +0000 UTC m=+0.171870306 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, release=1766032510) Feb 1 03:39:12 localhost podman[92575]: 2026-02-01 08:39:12.96642119 +0000 UTC m=+0.180348696 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:39:12 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:39:12 localhost podman[92576]: 2026-02-01 08:39:12.97253689 +0000 UTC m=+0.180488602 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Feb 1 03:39:12 localhost podman[92576]: 2026-02-01 08:39:12.981545369 +0000 UTC 
m=+0.189497101 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team) Feb 1 03:39:12 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:39:13 localhost podman[92588]: 2026-02-01 08:39:13.021337339 +0000 UTC m=+0.223002317 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Feb 1 03:39:13 localhost podman[92588]: 2026-02-01 08:39:13.070708076 +0000 UTC m=+0.272373054 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, release=1766032510, build-date=2026-01-12T23:07:30Z) Feb 1 03:39:13 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:39:13 localhost podman[92579]: 2026-02-01 08:39:13.084734679 +0000 UTC m=+0.286482059 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:07:47Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:39:13 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:39:13 localhost podman[92575]: 2026-02-01 08:39:13.123805097 +0000 UTC m=+0.337732543 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, io.buildah.version=1.41.5, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible) Feb 1 03:39:13 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:39:16 localhost podman[92691]: 2026-02-01 08:39:16.848113736 +0000 UTC m=+0.067438256 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1) Feb 1 03:39:17 localhost podman[92691]: 2026-02-01 08:39:17.229011503 +0000 UTC m=+0.448335963 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 1 03:39:17 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:39:18 localhost systemd[1]: tmp-crun.jPjrHv.mount: Deactivated successfully. 
Feb 1 03:39:18 localhost podman[92714]: 2026-02-01 08:39:18.881495835 +0000 UTC m=+0.094268676 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 1 03:39:18 localhost podman[92714]: 2026-02-01 08:39:18.932733639 +0000 UTC m=+0.145506470 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, architecture=x86_64, release=1766032510, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:39:18 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Deactivated successfully. 
Feb 1 03:39:18 localhost podman[92715]: 2026-02-01 08:39:18.937837807 +0000 UTC m=+0.148260365 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:39:19 localhost podman[92715]: 2026-02-01 08:39:19.019509261 +0000 UTC m=+0.229931789 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:39:19 localhost podman[92715]: unhealthy Feb 1 03:39:19 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:39:19 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:39:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:39:31 localhost podman[92764]: 2026-02-01 08:39:31.861402267 +0000 UTC m=+0.080252681 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:39:32 localhost podman[92764]: 2026-02-01 08:39:32.10856086 +0000 UTC m=+0.327411244 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, distribution-scope=public, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:39:32 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
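The entries above all follow the same cycle: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container-id>, podman records a health_status event followed by exec_died, and the unit either deactivates cleanly or, as with ovn_controller at 03:39:19, prints "unhealthy" and fails with status=1/FAILURE. A minimal sketch for pulling those failures out of a journal covering this window (assuming one journal entry per input line; the regexes and function name are illustrative, not part of podman or systemd):

import re
import sys

# Ties each podman healthcheck PID to the container it is checking, using the
# "container health_status <id> (image=..., name=<container>, ...)" events,
# then reports any PID that later prints the bare "unhealthy" verdict.
EVENT = re.compile(
    r"podman\[(?P<pid>\d+)\]: .*container health_status "
    r"(?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,]+)"
)
VERDICT = re.compile(r"podman\[(?P<pid>\d+)\]: (?P<verdict>healthy|unhealthy)\b")

def failing_checks(lines):
    last_by_pid = {}  # podman PID -> (container name, container ID)
    for line in lines:
        if (m := EVENT.search(line)):
            last_by_pid[m.group("pid")] = (m.group("name"), m.group("cid"))
        elif (m := VERDICT.search(line)) and m.group("verdict") == "unhealthy":
            yield last_by_pid.get(m.group("pid"), ("<unknown>", ""))

if __name__ == "__main__":
    for name, cid in failing_checks(sys.stdin):
        print(f"unhealthy: {name} ({cid[:12]})")

Fed the journal for this window, the sketch would report ovn_controller at 03:39:19 and again at 03:39:49, and ovn_metadata_agent at 03:39:49, matching the units systemd marks as "Failed with result 'exit-code'".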
Feb 1 03:39:41 localhost podman[92868]: 2026-02-01 08:39:41.877554247 +0000 UTC m=+0.095358359 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:39:41 localhost podman[92868]: 2026-02-01 08:39:41.890734344 +0000 UTC m=+0.108538536 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:39:41 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:39:43 localhost podman[92890]: 2026-02-01 08:39:43.886377826 +0000 UTC m=+0.090805389 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 1 03:39:43 localhost systemd[1]: tmp-crun.i7xMxJ.mount: Deactivated successfully. 
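The config_data= attribute podman attaches to each event is the container's TripleO definition serialized as a Python dict literal, so details such as the healthcheck command ('/openstack/healthcheck' for iscsid above, '/openstack/healthcheck 6642' for ovn_controller) or the bind mounts can be recovered straight from one event line. A small helper, again assuming the event is read as a single journal line (the function name is for illustration only):

import ast

def extract_config_data(event_line: str) -> dict:
    """Return the config_data={...} attribute of a podman event line,
    parsed as a Python literal (the journal prints it in dict syntax)."""
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # closing brace of the outer dict
                return ast.literal_eval(event_line[start:i + 1])
    raise ValueError("unterminated config_data attribute")

# For the iscsid event above:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]  ->  '/openstack/healthcheck'
#   cfg["privileged"]           ->  True
#   cfg["volumes"][0]           ->  '/etc/hosts:/etc/hosts:ro'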
Feb 1 03:39:43 localhost podman[92889]: 2026-02-01 08:39:43.934450312 +0000 UTC m=+0.142927650 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true) Feb 1 03:39:43 localhost podman[92891]: 2026-02-01 08:39:43.956676009 +0000 UTC m=+0.155881591 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:39:43 localhost podman[92889]: 2026-02-01 08:39:43.961522129 +0000 UTC m=+0.169999457 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, 
container_name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510) Feb 1 03:39:43 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:39:43 localhost podman[92890]: 2026-02-01 08:39:43.974948803 +0000 UTC m=+0.179376446 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:39:43 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:39:44 localhost podman[92891]: 2026-02-01 08:39:44.009574845 +0000 UTC m=+0.208780357 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:39:44 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
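Every one of these transient units wraps a single invocation of podman healthcheck run, which executes the healthcheck test command stored in the container's configuration (the config_data shown above). The same check can be run by hand to reproduce what the timer does; exit code 0 corresponds to "Deactivated successfully", a non-zero exit to the status=1/FAILURE results. A sketch, assumed to run on this host with the container names used here:

import subprocess

def run_healthcheck(container: str) -> bool:
    # Invoke the container's configured healthcheck once, the same way the
    # transient "<container-id>.service" units above do.
    proc = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
        text=True,
    )
    # A failing check prints "unhealthy" and exits non-zero, which systemd
    # then records as "Main process exited, code=exited, status=1/FAILURE".
    if proc.returncode != 0:
        print(f"{container}: {proc.stdout.strip() or proc.stderr.strip() or 'unhealthy'}")
    return proc.returncode == 0

# e.g. run_healthcheck("ovn_controller")  # a container name or full ID both work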
Feb 1 03:39:44 localhost podman[92910]: 2026-02-01 08:39:44.104783828 +0000 UTC m=+0.298602863 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:39:44 localhost podman[92888]: 2026-02-01 08:39:44.135612601 +0000 UTC m=+0.344238674 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:39:44 localhost podman[92910]: 2026-02-01 08:39:44.153247466 +0000 UTC m=+0.347066481 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4) Feb 1 03:39:44 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:39:44 localhost podman[92888]: 2026-02-01 08:39:44.167169857 +0000 UTC m=+0.375795910 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 1 03:39:44 localhost systemd[1]: 
07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:39:47 localhost systemd[1]: tmp-crun.IrZHT9.mount: Deactivated successfully. Feb 1 03:39:47 localhost podman[93001]: 2026-02-01 08:39:47.871641311 +0000 UTC m=+0.084033439 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.13, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, managed_by=tripleo_ansible) Feb 1 03:39:48 localhost podman[93001]: 2026-02-01 08:39:48.241850768 +0000 UTC m=+0.454242876 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack 
TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git) Feb 1 03:39:48 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:39:49 localhost systemd[1]: tmp-crun.XmtLre.mount: Deactivated successfully. 
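The transient unit names are the full 64-character container IDs, so a failed <id>.service line can be resolved to a container name directly on the host (for example, the unit that failed at 03:39:19 above is the ovn_controller container). A hypothetical helper shelling out to podman ps:

import subprocess

def container_name(cid: str) -> str:
    # Resolve a healthcheck unit name (i.e. the container ID) to the container
    # name podman shows, e.g. "ovn_controller" for the failing unit above.
    out = subprocess.run(
        ["podman", "ps", "--all", "--filter", f"id={cid}", "--format", "{{.Names}}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return out.stdout.strip()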
Feb 1 03:39:49 localhost podman[93024]: 2026-02-01 08:39:49.875379704 +0000 UTC m=+0.094870704 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 1 03:39:49 localhost podman[93025]: 2026-02-01 08:39:49.922670536 +0000 UTC m=+0.139629288 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:39:49 localhost podman[93025]: 2026-02-01 08:39:49.933532161 +0000 UTC m=+0.150490933 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:39:49 localhost podman[93025]: unhealthy Feb 1 03:39:49 localhost podman[93024]: 2026-02-01 08:39:49.941148977 +0000 UTC m=+0.160639977 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, container_name=ovn_metadata_agent) Feb 1 03:39:49 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:39:49 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:39:49 localhost podman[93024]: unhealthy Feb 1 03:39:49 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:39:49 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:40:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:40:02 localhost podman[93063]: 2026-02-01 08:40:02.858796697 +0000 UTC m=+0.076158769 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:40:03 localhost 
podman[93063]: 2026-02-01 08:40:03.069945564 +0000 UTC m=+0.287307596 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:40:03 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:40:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:40:10 localhost recover_tripleo_nova_virtqemud[93095]: 62016 Feb 1 03:40:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:40:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:40:12 localhost podman[93096]: 2026-02-01 08:40:12.863140501 +0000 UTC m=+0.081455896 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:40:12 localhost podman[93096]: 2026-02-01 08:40:12.901632833 +0000 UTC m=+0.119948238 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13) Feb 1 03:40:12 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:40:14 localhost systemd[1]: tmp-crun.6n6nPQ.mount: Deactivated successfully. 
Feb 1 03:40:14 localhost podman[93116]: 2026-02-01 08:40:14.8835971 +0000 UTC m=+0.095096642 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z) Feb 1 03:40:14 localhost podman[93117]: 2026-02-01 08:40:14.938147514 +0000 UTC m=+0.145875968 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5) Feb 1 03:40:14 localhost podman[93119]: 2026-02-01 08:40:14.90665157 +0000 UTC m=+0.107489418 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:40:14 localhost podman[93119]: 2026-02-01 08:40:14.98637347 +0000 UTC m=+0.187211338 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, distribution-scope=public, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 1 03:40:14 localhost podman[93125]: 2026-02-01 08:40:14.993364329 +0000 UTC m=+0.194472007 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1) Feb 1 03:40:14 localhost systemd[1]: 
35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:40:15 localhost podman[93118]: 2026-02-01 08:40:15.030402526 +0000 UTC m=+0.234400594 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.5, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:40:15 localhost podman[93118]: 2026-02-01 08:40:15.037963842 +0000 UTC m=+0.241962000 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13) Feb 1 03:40:15 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:40:15 localhost podman[93116]: 2026-02-01 08:40:15.063782259 +0000 UTC m=+0.275281801 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:40:15 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:40:15 localhost podman[93125]: 2026-02-01 08:40:15.091786824 +0000 UTC m=+0.292894502 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Feb 1 03:40:15 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:40:15 localhost podman[93117]: 2026-02-01 08:40:15.115034 +0000 UTC m=+0.322762464 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:40:15 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:40:15 localhost systemd[1]: tmp-crun.Xhpduj.mount: Deactivated successfully. 
Feb 1 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:40:18 localhost podman[93233]: 2026-02-01 08:40:18.847964716 +0000 UTC m=+0.066614471 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:40:19 localhost podman[93233]: 2026-02-01 08:40:19.215748806 +0000 UTC m=+0.434398551 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc.) Feb 1 03:40:19 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:40:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:40:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:40:20 localhost systemd[1]: tmp-crun.qxzhzI.mount: Deactivated successfully. 
Feb 1 03:40:20 localhost podman[93258]: 2026-02-01 08:40:20.876426605 +0000 UTC m=+0.090363494 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510) Feb 1 03:40:20 localhost podman[93259]: 2026-02-01 08:40:20.919053246 +0000 UTC m=+0.129812196 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, build-date=2026-01-12T22:36:40Z, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:40:20 localhost podman[93259]: 2026-02-01 08:40:20.93261072 +0000 UTC m=+0.143369660 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, release=1766032510) Feb 1 03:40:20 localhost podman[93259]: unhealthy Feb 1 03:40:20 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:40:20 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:40:20 localhost podman[93258]: 2026-02-01 08:40:20.969890844 +0000 UTC m=+0.183827733 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:40:20 localhost podman[93258]: unhealthy Feb 1 03:40:20 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:40:20 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:40:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:40:33 localhost podman[93298]: 2026-02-01 08:40:33.88592374 +0000 UTC m=+0.079973419 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:40:34 localhost podman[93298]: 
2026-02-01 08:40:34.076605017 +0000 UTC m=+0.270654686 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, release=1766032510, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 1 03:40:34 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:40:43 localhost systemd[1]: tmp-crun.X5JWRr.mount: Deactivated successfully. 
Feb 1 03:40:43 localhost podman[93437]: 2026-02-01 08:40:43.885509516 +0000 UTC m=+0.099562381 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:40:43 localhost podman[93437]: 2026-02-01 08:40:43.89620875 +0000 UTC m=+0.110261575 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:40:43 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:40:45 localhost systemd[1]: tmp-crun.uIHHeO.mount: Deactivated successfully. 
Feb 1 03:40:45 localhost podman[93483]: 2026-02-01 08:40:45.593155732 +0000 UTC m=+0.087623158 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:40:45 localhost podman[93474]: 2026-02-01 08:40:45.629355513 +0000 UTC m=+0.135461583 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_compute) Feb 1 03:40:45 localhost podman[93474]: 2026-02-01 08:40:45.686721535 +0000 UTC m=+0.192827555 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:40:45 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:40:45 localhost podman[93483]: 2026-02-01 08:40:45.703830949 +0000 UTC m=+0.198298295 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 1 03:40:45 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:40:45 localhost podman[93476]: 2026-02-01 08:40:45.687475539 +0000 UTC m=+0.186067524 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.13, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Feb 1 03:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 
03:40:45 localhost podman[93473]: 2026-02-01 08:40:45.791071905 +0000 UTC m=+0.299465296 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond) Feb 1 03:40:45 localhost podman[93473]: 2026-02-01 08:40:45.803741091 +0000 UTC m=+0.312134542 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.expose-services=, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:40:45 localhost podman[93476]: 2026-02-01 08:40:45.818024637 +0000 UTC m=+0.316616572 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:47Z, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:40:45 localhost podman[93475]: 2026-02-01 08:40:45.841455519 +0000 UTC m=+0.342939565 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, container_name=iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:40:45 localhost systemd[1]: 
07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:40:45 localhost podman[93475]: 2026-02-01 08:40:45.875499853 +0000 UTC m=+0.376983899 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git) Feb 1 03:40:45 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:40:45 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:40:49 localhost systemd[1]: tmp-crun.LAy5qt.mount: Deactivated successfully. 
Feb 1 03:40:49 localhost podman[93588]: 2026-02-01 08:40:49.854872608 +0000 UTC m=+0.073035803 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, container_name=nova_migration_target, config_id=tripleo_step4, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:40:50 localhost podman[93588]: 2026-02-01 08:40:50.247563665 +0000 UTC m=+0.465726840 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true) Feb 1 03:40:50 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 03:40:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:40:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:40:51 localhost podman[93613]: 2026-02-01 08:40:51.861849786 +0000 UTC m=+0.079747933 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent) Feb 1 03:40:51 localhost podman[93614]: 2026-02-01 08:40:51.880764726 +0000 UTC m=+0.093076239 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, container_name=ovn_controller, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:40:51 localhost podman[93614]: 2026-02-01 08:40:51.891548563 +0000 UTC m=+0.103860096 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64) Feb 1 03:40:51 localhost podman[93614]: unhealthy Feb 1 03:40:51 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:40:51 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:40:51 localhost podman[93613]: 2026-02-01 08:40:51.91162504 +0000 UTC m=+0.129523117 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:40:51 localhost podman[93613]: unhealthy Feb 1 03:40:51 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:40:51 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:41:04 localhost podman[93654]: 2026-02-01 08:41:04.871766323 +0000 UTC m=+0.086807643 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, version=17.1.13) Feb 1 
03:41:05 localhost podman[93654]: 2026-02-01 08:41:05.108847189 +0000 UTC m=+0.323888519 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:41:05 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:41:14 localhost systemd[1]: tmp-crun.jE53M8.mount: Deactivated successfully. 
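The entries above show the recurring pattern on this host: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container-id>, podman records a health_status event followed by exec_died, and the unit either deactivates cleanly (healthy, as for metrics_qdr) or fails with status=1/FAILURE (unhealthy, as for ovn_controller and ovn_metadata_agent earlier). A minimal sketch, assuming podman is on PATH and using the container names taken from this log, for reading the last recorded status directly instead of scrolling the journal:

```python
"""Minimal sketch: read the health status podman last recorded for the
containers named in this journal. Container names come from the log;
everything else (podman on PATH, sufficient privileges to see these
containers) is an assumption."""
import subprocess

CONTAINERS = ["ovn_controller", "ovn_metadata_agent", "metrics_qdr", "nova_compute"]

def health_status(name: str) -> str:
    # ".State.Health.Status" is the Docker-compatible field; some older podman
    # builds expose the same value under ".State.Healthcheck.Status".
    proc = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()

if __name__ == "__main__":
    for name in CONTAINERS:
        print(f"{name:<20} {health_status(name)}")
```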
Feb 1 03:41:14 localhost podman[93683]: 2026-02-01 08:41:14.874817017 +0000 UTC m=+0.086731481 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:41:14 localhost podman[93683]: 2026-02-01 08:41:14.886675598 +0000 UTC m=+0.098590032 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Feb 1 03:41:14 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:41:15 localhost podman[93702]: 2026-02-01 08:41:15.855028348 +0000 UTC m=+0.070938917 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z) Feb 1 03:41:15 localhost systemd[1]: tmp-crun.RHcz9n.mount: Deactivated successfully. Feb 1 03:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:41:15 localhost podman[93702]: 2026-02-01 08:41:15.91238324 +0000 UTC m=+0.128293879 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:41:15 localhost systemd[1]: tmp-crun.CkNJ70.mount: Deactivated successfully. 
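Each health_status event above carries the container name and result inside the parenthesised attribute dump (name=..., health_status=...). A rough sketch for tallying those events out of saved journal text; the regex relies only on the fields visible in these lines and is a starting point, not a general syslog parser (the script name is hypothetical):

```python
"""Rough sketch: count health_status events per container from journal text
like the lines above. Reads from stdin, e.g.:
    journalctl --since today | python3 tally_health.py
"""
import re
import sys
from collections import Counter

EVENT = re.compile(
    r"container health_status \S+ \(image=[^,]+, name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,]+),"
)

def tally(lines):
    counts = Counter()
    for line in lines:
        for match in EVENT.finditer(line):
            counts[(match["name"], match["status"])] += 1
    return counts

if __name__ == "__main__":
    for (name, status), n in sorted(tally(sys.stdin).items()):
        print(f"{name:<25} {status:<10} {n}")
```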
Feb 1 03:41:15 localhost podman[93701]: 2026-02-01 08:41:15.928759732 +0000 UTC m=+0.148434588 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.buildah.version=1.41.5, release=1766032510) Feb 1 03:41:15 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:41:15 localhost podman[93742]: 2026-02-01 08:41:15.982765229 +0000 UTC m=+0.068580864 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, architecture=x86_64) Feb 1 03:41:15 localhost podman[93701]: 2026-02-01 08:41:15.985532385 +0000 UTC m=+0.205207211 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:41:15 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
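The config_data={...} blobs repeated in these events are Python-literal dicts (single quotes, True/False) that tripleo_ansible attaches to each container; they spell out the image, healthcheck test, mounts and restart policy. A sketch, under the assumption that the same data is still available as a config_data container label rather than only in the journal text, for pulling one container's settings back out of podman:

```python
"""Sketch assuming each container carries its deployment settings in a
'config_data' label, as the events above suggest. The label is a Python
literal rather than JSON, hence ast.literal_eval."""
import ast
import subprocess

def config_data(name: str) -> dict:
    # Go-template "index" looks the key up in the .Config.Labels map.
    proc = subprocess.run(
        ["podman", "inspect", "--format",
         '{{index .Config.Labels "config_data"}}', name],
        capture_output=True, text=True, check=True,
    )
    return ast.literal_eval(proc.stdout.strip())

if __name__ == "__main__":
    cfg = config_data("nova_compute")
    print("healthcheck:", cfg.get("healthcheck", {}).get("test"))
    print("restart:    ", cfg.get("restart"))
    for volume in cfg.get("volumes", []):
        print("volume:     ", volume)
```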
Feb 1 03:41:16 localhost podman[93746]: 2026-02-01 08:41:16.06699978 +0000 UTC m=+0.130494308 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T23:07:47Z) Feb 1 03:41:16 localhost podman[93746]: 2026-02-01 08:41:16.088532623 +0000 UTC m=+0.152027121 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:41:16 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:41:16 localhost podman[93742]: 2026-02-01 08:41:16.120480131 +0000 UTC m=+0.206295806 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container) Feb 1 03:41:16 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:41:16 localhost podman[93741]: 2026-02-01 08:41:16.041246266 +0000 UTC m=+0.133328747 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:41:16 localhost podman[93741]: 2026-02-01 08:41:16.171007899 +0000 UTC m=+0.263090400 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 
17.1 cron, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:41:16 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:41:20 localhost systemd[1]: tmp-crun.C4WmfC.mount: Deactivated successfully. 
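The healthcheck commands themselves are visible in the config_data above: /openstack/healthcheck for most agents, /openstack/healthcheck 5672 for nova_compute, /openstack/healthcheck 6642 for ovn_controller, and /usr/share/openstack-tripleo-common/healthcheck/cron for logrotate_crond. When a container keeps reporting unhealthy, one option is to re-run the same command by hand inside the container and look at its output; a small sketch with the commands copied from this log and everything else assumed:

```python
"""Small sketch: re-run the healthcheck commands recorded in config_data for
the two containers reporting 'unhealthy' above. What the scripts print is up
to the image; this only surfaces their exit codes and output."""
import shlex
import subprocess

CHECKS = {
    "ovn_controller": "/openstack/healthcheck 6642",
    "ovn_metadata_agent": "/openstack/healthcheck",
}

for name, command in CHECKS.items():
    proc = subprocess.run(
        ["podman", "exec", name, *shlex.split(command)],
        capture_output=True, text=True,
    )
    print(f"{name}: exit={proc.returncode}")
    for stream in (proc.stdout, proc.stderr):
        if stream.strip():
            print(stream.strip())
```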
Feb 1 03:41:20 localhost podman[93818]: 2026-02-01 08:41:20.870044887 +0000 UTC m=+0.083025105 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:41:21 localhost podman[93818]: 2026-02-01 08:41:21.223337074 +0000 UTC m=+0.436317332 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_migration_target, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step4) Feb 1 03:41:21 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:41:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:41:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:41:22 localhost podman[93839]: 2026-02-01 08:41:22.864133431 +0000 UTC m=+0.080020420 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true) Feb 1 03:41:22 localhost podman[93839]: 2026-02-01 08:41:22.883707433 +0000 UTC m=+0.099594482 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 1 03:41:22 localhost podman[93839]: unhealthy Feb 1 03:41:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:41:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
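The units that keep failing here (e8f71eed….service, e9aad777….service) are named after the full container ID, so relating "Failed with result 'exit-code'" back to a container means resolving IDs to names. A sketch that does the mapping and pulls each unit's recent journal; it assumes the healthcheck units follow exactly the <container-id>.service pattern seen in this log:

```python
"""Sketch assuming the transient healthcheck units are named
'<full container id>.service', as in the journal above."""
import subprocess

def running_containers():
    # --no-trunc so {{.ID}} is the full 64-character ID; if this podman still
    # truncates it, fall back to `podman inspect --format '{{.Id}}' <name>`.
    proc = subprocess.run(
        ["podman", "ps", "--no-trunc", "--format", "{{.ID}} {{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    for line in proc.stdout.splitlines():
        cid, name = line.split(maxsplit=1)
        yield cid, name

if __name__ == "__main__":
    for cid, name in running_containers():
        unit = f"{cid}.service"
        log = subprocess.run(
            ["journalctl", "-u", unit, "-n", "3", "--no-pager"],
            capture_output=True, text=True,
        )
        print(f"== {name} ({unit})")
        print(log.stdout.rstrip())
```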
Feb 1 03:41:22 localhost podman[93840]: 2026-02-01 08:41:22.972764905 +0000 UTC m=+0.185526736 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:41:23 localhost podman[93840]: 2026-02-01 08:41:23.016814922 +0000 UTC m=+0.229576803 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4) Feb 1 03:41:23 localhost podman[93840]: unhealthy Feb 1 03:41:23 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:41:23 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:41:35 localhost sshd[93877]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:41:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:41:35 localhost podman[93878]: 2026-02-01 08:41:35.880767859 +0000 UTC m=+0.088847657 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, version=17.1.13) Feb 1 03:41:36 localhost podman[93878]: 2026-02-01 08:41:36.092667098 +0000 UTC m=+0.300746926 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:41:36 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:41:45 localhost podman[93922]: 2026-02-01 08:41:45.707161003 +0000 UTC m=+0.094778252 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, container_name=collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd) Feb 1 03:41:45 localhost podman[93922]: 2026-02-01 08:41:45.71667275 +0000 UTC m=+0.104290019 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13) Feb 1 03:41:45 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:41:46 localhost systemd[1]: tmp-crun.uxtUhY.mount: Deactivated successfully. 
Feb 1 03:41:46 localhost podman[94017]: 2026-02-01 08:41:46.721679237 +0000 UTC m=+0.083552902 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible) Feb 1 03:41:46 localhost podman[94017]: 2026-02-01 08:41:46.808618273 +0000 UTC m=+0.170491938 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:41:46 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:41:46 localhost podman[94032]: 2026-02-01 08:41:46.772815864 +0000 UTC m=+0.132011595 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:41:46 localhost podman[94022]: 2026-02-01 08:41:46.752340455 +0000 UTC m=+0.111789114 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:41:46 localhost podman[94032]: 2026-02-01 08:41:46.856617902 +0000 UTC m=+0.215813593 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, url=https://www.redhat.com) Feb 1 03:41:46 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:41:46 localhost podman[94022]: 2026-02-01 08:41:46.88569026 +0000 UTC m=+0.245138869 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:41:46 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:41:46 localhost podman[94014]: 2026-02-01 08:41:46.705497271 +0000 UTC m=+0.077484472 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 1 03:41:46 localhost podman[94014]: 2026-02-01 08:41:46.936324801 +0000 UTC m=+0.308311992 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:41:46 
localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:41:46 localhost podman[94011]: 2026-02-01 08:41:46.979505011 +0000 UTC m=+0.350393417 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, architecture=x86_64, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:41:46 localhost podman[94011]: 2026-02-01 08:41:46.985019543 +0000 UTC m=+0.355907969 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-cron-container, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:41:46 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:41:47 localhost podman[94177]: Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.282741014 +0000 UTC m=+0.071355051 container create a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, version=7, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, release=1764794109, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 03:41:47 localhost systemd[1]: Started libpod-conmon-a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942.scope. Feb 1 03:41:47 localhost systemd[1]: Started libcrun container. 
Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.25509553 +0000 UTC m=+0.043709597 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.354985241 +0000 UTC m=+0.143599278 container init a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.363695443 +0000 UTC m=+0.152309460 container start a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, architecture=x86_64, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, release=1764794109, io.buildah.version=1.41.4, version=7, CEPH_POINT_RELEASE=) Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.363872658 +0000 UTC m=+0.152486705 container attach a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-12-08T17:28:53Z, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, maintainer=Guillaume Abrioux ) Feb 1 03:41:47 localhost kind_proskuriakova[94192]: 167 167 Feb 1 03:41:47 localhost systemd[1]: libpod-a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942.scope: Deactivated successfully. Feb 1 03:41:47 localhost podman[94177]: 2026-02-01 08:41:47.368651908 +0000 UTC m=+0.157265975 container died a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 03:41:47 localhost podman[94198]: 2026-02-01 08:41:47.462332984 +0000 UTC m=+0.080681272 container remove a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_proskuriakova, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_CLEAN=True, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7) Feb 1 03:41:47 localhost systemd[1]: libpod-conmon-a99ccce78b246bc26f1d4f4c59557a761a541ce8ada40c57a8ef8a7b8e181942.scope: Deactivated successfully. Feb 1 03:41:47 localhost podman[94221]: Feb 1 03:41:47 localhost podman[94221]: 2026-02-01 08:41:47.660524716 +0000 UTC m=+0.076349386 container create eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, vcs-type=git, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1764794109, io.openshift.expose-services=) Feb 1 03:41:47 localhost systemd[1]: Started libpod-conmon-eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447.scope. Feb 1 03:41:47 localhost systemd[1]: Started libcrun container. 
Feb 1 03:41:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ac4e8be7df32fbfc4d681407780e12093dcf7eec7d7f9a44d56f550b7855f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 03:41:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ac4e8be7df32fbfc4d681407780e12093dcf7eec7d7f9a44d56f550b7855f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 03:41:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ac4e8be7df32fbfc4d681407780e12093dcf7eec7d7f9a44d56f550b7855f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 03:41:47 localhost podman[94221]: 2026-02-01 08:41:47.720969623 +0000 UTC m=+0.136794293 container init eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1764794109, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 03:41:47 localhost podman[94221]: 2026-02-01 08:41:47.629733824 +0000 UTC m=+0.045558524 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 03:41:47 localhost podman[94221]: 2026-02-01 08:41:47.733100923 +0000 UTC m=+0.148925583 container start eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, release=1764794109, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 03:41:47 localhost podman[94221]: 2026-02-01 08:41:47.733542907 +0000 UTC m=+0.149367617 container attach eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, ceph=True, release=1764794109, build-date=2025-12-08T17:28:53Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7) Feb 1 03:41:48 localhost festive_buck[94235]: [ Feb 1 03:41:48 localhost festive_buck[94235]: { Feb 1 03:41:48 localhost festive_buck[94235]: "available": false, Feb 1 03:41:48 localhost festive_buck[94235]: "ceph_device": false, Feb 1 03:41:48 localhost festive_buck[94235]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 1 03:41:48 localhost festive_buck[94235]: "lsm_data": {}, Feb 1 03:41:48 localhost festive_buck[94235]: "lvs": [], Feb 1 03:41:48 localhost festive_buck[94235]: "path": "/dev/sr0", Feb 1 03:41:48 localhost festive_buck[94235]: "rejected_reasons": [ Feb 1 03:41:48 localhost festive_buck[94235]: "Has a FileSystem", Feb 1 03:41:48 localhost festive_buck[94235]: "Insufficient space (<5GB)" Feb 1 03:41:48 localhost festive_buck[94235]: ], Feb 1 03:41:48 localhost festive_buck[94235]: "sys_api": { Feb 1 03:41:48 localhost festive_buck[94235]: "actuators": null, Feb 1 03:41:48 localhost festive_buck[94235]: "device_nodes": "sr0", Feb 1 03:41:48 localhost festive_buck[94235]: "human_readable_size": "482.00 KB", Feb 1 03:41:48 localhost festive_buck[94235]: "id_bus": "ata", Feb 1 03:41:48 localhost festive_buck[94235]: "model": "QEMU DVD-ROM", Feb 1 03:41:48 localhost festive_buck[94235]: "nr_requests": "2", Feb 1 03:41:48 localhost festive_buck[94235]: "partitions": {}, Feb 1 03:41:48 localhost festive_buck[94235]: "path": "/dev/sr0", Feb 1 03:41:48 localhost festive_buck[94235]: "removable": "1", Feb 1 03:41:48 localhost festive_buck[94235]: "rev": "2.5+", Feb 1 03:41:48 localhost festive_buck[94235]: "ro": "0", Feb 1 03:41:48 localhost festive_buck[94235]: "rotational": "1", Feb 1 03:41:48 localhost festive_buck[94235]: "sas_address": "", Feb 1 03:41:48 localhost festive_buck[94235]: "sas_device_handle": "", Feb 1 03:41:48 localhost festive_buck[94235]: "scheduler_mode": "mq-deadline", Feb 1 03:41:48 localhost festive_buck[94235]: "sectors": 0, Feb 1 03:41:48 localhost festive_buck[94235]: "sectorsize": "2048", Feb 1 03:41:48 localhost festive_buck[94235]: "size": 493568.0, Feb 1 03:41:48 localhost 
festive_buck[94235]: "support_discard": "0", Feb 1 03:41:48 localhost festive_buck[94235]: "type": "disk", Feb 1 03:41:48 localhost festive_buck[94235]: "vendor": "QEMU" Feb 1 03:41:48 localhost festive_buck[94235]: } Feb 1 03:41:48 localhost festive_buck[94235]: } Feb 1 03:41:48 localhost festive_buck[94235]: ] Feb 1 03:41:48 localhost systemd[1]: libpod-eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447.scope: Deactivated successfully. Feb 1 03:41:48 localhost podman[94221]: 2026-02-01 08:41:48.611666809 +0000 UTC m=+1.027491479 container died eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1764794109, ceph=True, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 03:41:48 localhost podman[96012]: 2026-02-01 08:41:48.677928339 +0000 UTC m=+0.060809361 container remove eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_buck, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=) Feb 1 03:41:48 localhost systemd[1]: libpod-conmon-eaa9ed93ad359bd12a9b2a7ee2014bc26cca34fa0c383688db1c86b5649c2447.scope: Deactivated successfully. 
Feb 1 03:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-975ac4e8be7df32fbfc4d681407780e12093dcf7eec7d7f9a44d56f550b7855f-merged.mount: Deactivated successfully. Feb 1 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:41:51 localhost systemd[1]: tmp-crun.jICVzs.mount: Deactivated successfully. Feb 1 03:41:51 localhost podman[96041]: 2026-02-01 08:41:51.875739527 +0000 UTC m=+0.086210133 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:41:52 localhost podman[96041]: 2026-02-01 08:41:52.24870874 +0000 UTC m=+0.459179346 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
release=1766032510, batch=17.1_20260112.1, architecture=x86_64, container_name=nova_migration_target, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.13) Feb 1 03:41:52 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:41:53 localhost podman[96065]: 2026-02-01 08:41:53.854221945 +0000 UTC m=+0.067220971 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:41:53 localhost podman[96065]: 2026-02-01 08:41:53.866719695 +0000 UTC m=+0.079718691 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com)
Feb 1 03:41:53 localhost podman[96065]: unhealthy
Feb 1 03:41:53 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE
Feb 1 03:41:53 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'.
Feb 1 03:41:53 localhost podman[96064]: 2026-02-01 08:41:53.899064716 +0000 UTC m=+0.112043281 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z',
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:41:53 localhost podman[96064]: 2026-02-01 08:41:53.909598025 +0000 UTC m=+0.122576590 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:41:53 localhost podman[96064]: unhealthy Feb 1 03:41:53 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:41:53 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:42:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:42:00 localhost recover_tripleo_nova_virtqemud[96106]: 62016 Feb 1 03:42:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:42:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:42:06 localhost podman[96107]: 2026-02-01 08:42:06.865182555 +0000 UTC m=+0.082356134 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_step1, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:42:07 localhost podman[96107]: 2026-02-01 08:42:07.08590983 +0000 UTC m=+0.303083369 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 1 03:42:07 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:42:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:42:15 localhost podman[96136]: 2026-02-01 08:42:15.86572596 +0000 UTC m=+0.084161980 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:42:15 localhost podman[96136]: 2026-02-01 08:42:15.880800221 +0000 UTC m=+0.099236231 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
com.redhat.component=openstack-collectd-container, distribution-scope=public, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 1 03:42:15 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:42:17 localhost systemd[1]: tmp-crun.gOrlFY.mount: Deactivated successfully. 
Feb 1 03:42:17 localhost podman[96165]: 2026-02-01 08:42:17.876371273 +0000 UTC m=+0.078823364 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:42:17 localhost podman[96158]: 2026-02-01 08:42:17.88845139 +0000 UTC m=+0.094647938 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:42:17 localhost podman[96165]: 2026-02-01 08:42:17.90222074 +0000 UTC m=+0.104672811 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:42:17 localhost podman[96158]: 2026-02-01 08:42:17.92461923 +0000 UTC m=+0.130815788 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:42:17 localhost podman[96157]: 2026-02-01 08:42:17.938786082 +0000 UTC m=+0.145801056 container 
health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 1 03:42:17 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:42:17 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:42:17 localhost podman[96160]: 2026-02-01 08:42:17.998779246 +0000 UTC m=+0.199721470 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git) Feb 1 03:42:18 localhost podman[96157]: 2026-02-01 08:42:18.021668272 +0000 UTC m=+0.228683216 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, version=17.1.13) Feb 1 03:42:18 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:42:18 localhost podman[96160]: 2026-02-01 08:42:18.059857504 +0000 UTC m=+0.260799698 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git) Feb 1 03:42:18 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:42:18 localhost podman[96156]: 2026-02-01 08:42:18.103845429 +0000 UTC m=+0.316475558 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=) Feb 1 03:42:18 localhost podman[96156]: 2026-02-01 08:42:18.11091935 +0000 UTC m=+0.323549519 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, container_name=logrotate_crond) Feb 1 03:42:18 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:42:22 localhost systemd[1]: tmp-crun.p5pZV2.mount: Deactivated successfully. 
Feb 1 03:42:22 localhost podman[96274]: 2026-02-01 08:42:22.862361464 +0000 UTC m=+0.081412594 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:42:23 localhost podman[96274]: 2026-02-01 08:42:23.227039917 +0000 UTC m=+0.446091037 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:42:23 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:42:24 localhost podman[96299]: 2026-02-01 08:42:24.870084315 +0000 UTC m=+0.081071314 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.openshift.expose-services=) Feb 1 03:42:24 localhost podman[96299]: 2026-02-01 08:42:24.889751269 +0000 UTC m=+0.100738298 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
vcs-type=git, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team) Feb 1 03:42:24 localhost podman[96299]: unhealthy Feb 1 03:42:24 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:42:24 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:42:24 localhost podman[96298]: 2026-02-01 08:42:24.97713071 +0000 UTC m=+0.188765969 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:42:24 localhost podman[96298]: 2026-02-01 08:42:24.995764071 +0000 UTC m=+0.207399330 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, distribution-scope=public, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5) Feb 1 03:42:25 localhost podman[96298]: unhealthy Feb 1 03:42:25 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:42:25 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:42:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:42:37 localhost podman[96335]: 2026-02-01 08:42:37.87510009 +0000 UTC m=+0.090824679 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 1 
03:42:38 localhost podman[96335]: 2026-02-01 08:42:38.07485503 +0000 UTC m=+0.290579619 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:42:38 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:42:46 localhost podman[96365]: 2026-02-01 08:42:46.866954164 +0000 UTC m=+0.080901378 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true) Feb 1 03:42:46 localhost podman[96365]: 2026-02-01 08:42:46.879817736 +0000 UTC m=+0.093764960 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, container_name=collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:42:46 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:42:48 localhost systemd[1]: tmp-crun.yKkTFu.mount: Deactivated successfully. Feb 1 03:42:48 localhost systemd[1]: tmp-crun.wMp4Qf.mount: Deactivated successfully. 
Feb 1 03:42:48 localhost podman[96388]: 2026-02-01 08:42:48.883574382 +0000 UTC m=+0.084904283 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:42:48 localhost podman[96394]: 2026-02-01 08:42:48.951776584 +0000 UTC m=+0.146434707 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T23:07:30Z, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64) Feb 1 03:42:48 localhost podman[96388]: 2026-02-01 08:42:48.968797845 +0000 UTC m=+0.170127766 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:42:48 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:42:48 localhost podman[96385]: 2026-02-01 08:42:48.986372594 +0000 UTC m=+0.196163149 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true) Feb 1 03:42:49 localhost podman[96385]: 2026-02-01 08:42:49.023779682 +0000 UTC m=+0.233570237 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:42:49 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:42:49 localhost podman[96386]: 2026-02-01 08:42:49.050649202 +0000 UTC m=+0.258553808 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public) Feb 1 03:42:49 localhost podman[96394]: 2026-02-01 08:42:49.061717938 +0000 UTC m=+0.256376081 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:42:49 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:42:49 localhost podman[96386]: 2026-02-01 08:42:49.084694396 +0000 UTC m=+0.292599082 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, version=17.1.13, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team) Feb 1 03:42:49 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:42:49 localhost podman[96387]: 2026-02-01 08:42:48.91356576 +0000 UTC m=+0.117102810 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:42:49 localhost podman[96387]: 2026-02-01 08:42:49.143812432 +0000 UTC m=+0.347349572 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:42:49 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:42:53 localhost podman[96629]: 2026-02-01 08:42:53.962407484 +0000 UTC m=+0.084359847 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, architecture=x86_64, release=1766032510, vcs-type=git) Feb 1 03:42:54 localhost podman[96629]: 2026-02-01 08:42:54.328740328 +0000 UTC m=+0.450692671 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com) Feb 1 03:42:54 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
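The config_data label carried by these events is the container definition that tripleo_ansible handed to podman, rendered as a Python literal (single-quoted strings, True/False), so once the value is cut out of the label dump it can be read back with ast.literal_eval. A small sketch; the dict below is abridged from the nova_compute config_data logged above, and the abridgement itself is only illustrative:

    import ast

    # Abridged from the nova_compute config_data logged above.
    raw = (
        "{'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
        "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
        "'net': 'host', 'privileged': True, 'user': 'nova'}"
    )
    config_data = ast.literal_eval(raw)

    # The check command configured for the container, i.e. what
    # /usr/bin/podman healthcheck run executes inside it.  Most services here use
    # plain /openstack/healthcheck; nova_compute passes an extra argument
    # (5672, presumably the AMQP port to probe).
    print(config_data["healthcheck"]["test"])   # -> /openstack/healthcheck 5672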
Feb 1 03:42:55 localhost podman[96653]: 2026-02-01 08:42:55.870253714 +0000 UTC m=+0.083324343 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:42:55 localhost podman[96654]: 2026-02-01 08:42:55.919037929 +0000 UTC m=+0.128552297 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack 
TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 1 03:42:55 localhost podman[96653]: 2026-02-01 08:42:55.938572769 +0000 UTC m=+0.151643428 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:42:55 localhost podman[96653]: unhealthy Feb 1 03:42:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:42:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:42:55 localhost podman[96654]: 2026-02-01 08:42:55.95976043 +0000 UTC m=+0.169274718 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1) Feb 1 03:42:55 localhost podman[96654]: unhealthy Feb 1 03:42:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:42:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:43:08 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:43:08 localhost recover_tripleo_nova_virtqemud[96700]: 62016 Feb 1 03:43:08 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:43:08 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:43:08 localhost podman[96693]: 2026-02-01 08:43:08.874562566 +0000 UTC m=+0.089790816 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vcs-type=git, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:43:09 localhost podman[96693]: 2026-02-01 08:43:09.09876337 +0000 UTC m=+0.313991550 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 1 03:43:09 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
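The two unhealthy results above (ovn_metadata_agent and ovn_controller at 03:42:55) follow the same pattern: the check command exits non-zero, podman prints "unhealthy", and the transient unit named after the container ID fails with status=1/FAILURE. One way to reproduce such a failure interactively is to run the same test command from config_data inside the container; a sketch, assuming it is run as root on this compute host (container names and check commands are the ones logged above, the wrapper itself is illustrative):

    import subprocess

    def run_healthcheck(container, test_cmd):
        """Run a container's configured check command and report healthy/unhealthy."""
        proc = subprocess.run(["podman", "exec", container, *test_cmd],
                              capture_output=True, text=True)
        status = "healthy" if proc.returncode == 0 else "unhealthy"
        print(f"{container}: {status} (rc={proc.returncode})")
        if proc.returncode != 0:
            print(proc.stdout, proc.stderr, sep="")
        return proc.returncode

    # Check commands taken from the config_data entries logged above.
    run_healthcheck("ovn_metadata_agent", ["/openstack/healthcheck"])
    run_healthcheck("ovn_controller", ["/openstack/healthcheck", "6642"])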
Feb 1 03:43:17 localhost podman[96724]: 2026-02-01 08:43:17.866513073 +0000 UTC m=+0.082305332 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:43:17 localhost podman[96724]: 2026-02-01 08:43:17.881686718 +0000 UTC m=+0.097478947 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510) Feb 1 03:43:17 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:43:19 localhost systemd[1]: tmp-crun.MPmNJh.mount: Deactivated successfully. 
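Each healthcheck process stamps its entries with both a wall-clock time and a monotonic offset (m=+<seconds> since that podman process started), so the spacing between a process's health_status and exec_died entries can be read straight off the offsets. A trivial sketch with offsets copied from the collectd (podman[96724]) and nova_migration_target (podman[96629]) entries above:

    # (process, m=+ offset of health_status event, m=+ offset of exec_died event)
    events = [
        ("collectd / podman[96724]",              0.082305332, 0.097478947),
        ("nova_migration_target / podman[96629]", 0.084359847, 0.450692671),
    ]

    for who, health_status, exec_died in events:
        # Elapsed time between the two events logged by the same podman process.
        print(f"{who}: {exec_died - health_status:.3f}s between events")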
Feb 1 03:43:19 localhost podman[96744]: 2026-02-01 08:43:19.881869353 +0000 UTC m=+0.098967343 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 1 03:43:19 localhost podman[96744]: 2026-02-01 08:43:19.911737185 +0000 UTC m=+0.128835145 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, container_name=logrotate_crond, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 1 03:43:19 localhost podman[96745]: 2026-02-01 08:43:19.923398209 +0000 UTC m=+0.137774784 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:43:19 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:43:19 localhost podman[96746]: 2026-02-01 08:43:19.980308728 +0000 UTC m=+0.190936815 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:43:19 localhost podman[96746]: 2026-02-01 08:43:19.988448272 +0000 UTC m=+0.199076359 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 1 03:43:19 localhost podman[96745]: 2026-02-01 08:43:19.997920948 +0000 UTC m=+0.212297583 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 1 03:43:20 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
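The 'volumes' lists inside config_data use podman's bind-mount syntax, host-path:container-path[:options], where the options are comma-separated flags such as ro, z or shared. A short sketch that splits such entries, using mounts copied from the nova_compute definition above:

    def parse_volume(spec):
        """Split a 'host:container[:options]' bind-mount spec as used above."""
        parts = spec.split(":")
        host, container = parts[0], parts[1]
        options = parts[2].split(",") if len(parts) > 2 else []
        return host, container, options

    # Entries copied from the nova_compute config_data above.
    for spec in ("/var/lib/nova:/var/lib/nova:shared",
                 "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro",
                 "/dev/log:/dev/log"):
        print(parse_volume(spec))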
Feb 1 03:43:20 localhost podman[96747]: 2026-02-01 08:43:20.046552657 +0000 UTC m=+0.252291223 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:43:20 localhost podman[96753]: 2026-02-01 08:43:19.899416231 +0000 UTC m=+0.102061040 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, release=1766032510) Feb 1 03:43:20 localhost podman[96753]: 2026-02-01 08:43:20.086526686 +0000 UTC m=+0.289171455 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Feb 1 03:43:20 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:43:20 localhost podman[96747]: 2026-02-01 08:43:20.107590124 +0000 UTC m=+0.313328660 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:43:20 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:43:20 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:43:24 localhost podman[96861]: 2026-02-01 08:43:24.873454089 +0000 UTC m=+0.083921563 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Feb 1 03:43:25 localhost podman[96861]: 2026-02-01 08:43:25.265694772 +0000 UTC m=+0.476162286 container exec_died 
080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 1 03:43:25 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:43:26 localhost podman[96883]: 2026-02-01 08:43:26.870476375 +0000 UTC m=+0.084015685 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, vcs-type=git, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, container_name=ovn_metadata_agent) Feb 1 03:43:26 localhost podman[96883]: 2026-02-01 08:43:26.890731478 +0000 UTC m=+0.104270788 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Feb 1 03:43:26 localhost podman[96883]: unhealthy Feb 1 03:43:26 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:43:26 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:43:26 localhost podman[96884]: 2026-02-01 08:43:26.976803107 +0000 UTC m=+0.187863820 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team) Feb 1 03:43:27 localhost podman[96884]: 2026-02-01 08:43:27.020013947 +0000 UTC m=+0.231074630 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:43:27 localhost podman[96884]: unhealthy Feb 1 03:43:27 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:43:27 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:43:39 localhost systemd[1]: tmp-crun.wr5lnu.mount: Deactivated successfully. Feb 1 03:43:39 localhost podman[96924]: 2026-02-01 08:43:39.87613745 +0000 UTC m=+0.092151780 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, version=17.1.13) Feb 1 03:43:40 localhost podman[96924]: 2026-02-01 08:43:40.069632735 +0000 UTC m=+0.285647075 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64) Feb 1 03:43:40 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:43:48 localhost podman[96953]: 2026-02-01 08:43:48.870606448 +0000 UTC m=+0.085247545 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true) Feb 1 03:43:48 localhost podman[96953]: 2026-02-01 08:43:48.904383463 +0000 UTC m=+0.119024570 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, url=https://www.redhat.com) Feb 1 03:43:48 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:43:50 localhost systemd[1]: tmp-crun.GySAk2.mount: Deactivated successfully. 
Feb 1 03:43:50 localhost podman[96975]: 2026-02-01 08:43:50.891312954 +0000 UTC m=+0.096622790 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Feb 1 03:43:50 localhost podman[96975]: 2026-02-01 08:43:50.925518052 +0000 UTC m=+0.130827818 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 
iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:43:50 localhost podman[96976]: 2026-02-01 08:43:50.934549364 +0000 UTC m=+0.138769225 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com) Feb 1 03:43:50 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:43:50 localhost podman[96974]: 2026-02-01 08:43:50.983636018 +0000 UTC m=+0.193717792 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, build-date=2026-01-12T23:32:04Z) Feb 1 03:43:50 localhost podman[96976]: 2026-02-01 08:43:50.988778299 +0000 UTC m=+0.192998190 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, release=1766032510, vcs-type=git, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:43:51 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:43:51 localhost podman[96974]: 2026-02-01 08:43:51.036813919 +0000 UTC m=+0.246895723 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, release=1766032510, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:43:51 localhost systemd[1]: 
1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:43:51 localhost podman[96984]: 2026-02-01 08:43:50.91167724 +0000 UTC m=+0.109468301 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:43:51 localhost podman[96973]: 2026-02-01 08:43:51.041042621 +0000 UTC m=+0.252703925 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:43:51 localhost podman[96973]: 2026-02-01 08:43:51.120625778 +0000 UTC m=+0.332287152 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond) Feb 1 03:43:51 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:43:51 localhost podman[96984]: 2026-02-01 08:43:51.144913326 +0000 UTC m=+0.342704417 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, url=https://www.redhat.com, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, distribution-scope=public, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:43:51 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:43:55 localhost podman[97170]: 2026-02-01 08:43:55.862191183 +0000 UTC m=+0.078913165 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=) Feb 1 03:43:56 localhost podman[97170]: 2026-02-01 08:43:56.259844416 +0000 UTC m=+0.476566428 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, batch=17.1_20260112.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 1 03:43:56 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
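Annotation: the "Started /usr/bin/podman healthcheck run <64-hex-id>" lines above are systemd transient units that invoke each container's configured healthcheck, and the later "<id>.service: Deactivated successfully" lines show those units are named after the full container ID. The podman events in between carry the human-readable name= label. A minimal Python sketch, assuming this journal excerpt has been saved verbatim to "messages.log" (a hypothetical path), that pairs the unit IDs with container names using only the patterns visible in this log:

# Sketch: map healthcheck transient-unit IDs to the container name= labels.
# Assumptions: log saved as "messages.log"; podman events keep the
# "(image=<image>, name=<container>" label order seen in this excerpt.
import re

STARTED = re.compile(r"Started /usr/bin/podman healthcheck run ([0-9a-f]{64})")
EVENT = re.compile(r"container (?:health_status|exec_died) ([0-9a-f]{64}) "
                   r"\(image=[^,]+, name=([^,]+)")

def id_to_name(path="messages.log"):
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    names = {cid: cname for cid, cname in EVENT.findall(text)}
    return {cid: names.get(cid, "<no podman event seen>")
            for cid in STARTED.findall(text)}

if __name__ == "__main__":
    for cid, cname in sorted(id_to_name().items(), key=lambda kv: kv[1]):
        print(f"{cname:<24} {cid}.service")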
Feb 1 03:43:57 localhost podman[97196]: 2026-02-01 08:43:57.862270235 +0000 UTC m=+0.080404272 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z) Feb 1 03:43:57 localhost podman[97196]: 2026-02-01 08:43:57.907749106 +0000 UTC m=+0.125883133 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:43:57 localhost systemd[1]: tmp-crun.Gn1MZu.mount: Deactivated successfully. Feb 1 03:43:57 localhost podman[97196]: unhealthy Feb 1 03:43:57 localhost podman[97195]: 2026-02-01 08:43:57.925314835 +0000 UTC m=+0.144820995 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 1 03:43:57 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:43:57 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:43:57 localhost podman[97195]: 2026-02-01 08:43:57.968742381 +0000 UTC m=+0.188248521 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:43:57 localhost podman[97195]: unhealthy Feb 1 03:43:57 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:43:57 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:44:10 localhost podman[97235]: 2026-02-01 08:44:10.871069348 +0000 UTC m=+0.086723270 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:44:11 localhost podman[97235]: 2026-02-01 08:44:11.08878119 +0000 UTC m=+0.304435092 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.openshift.expose-services=) Feb 1 03:44:11 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:44:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:44:19 localhost systemd[1]: tmp-crun.U5tYOR.mount: Deactivated successfully. 
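Annotation: two of the probes above came back unhealthy (ovn_controller and ovn_metadata_agent): podman logged health_status=unhealthy and printed "unhealthy", and systemd marked their transient units "Failed with result 'exit-code'" with status=1/FAILURE, while the other containers stayed healthy. A small hedged sketch that tallies healthy versus unhealthy health_status events per container from a saved copy of this excerpt ("messages.log" is an assumed path):

# Sketch: count health_status outcomes per container in this journal excerpt.
# Assumption: events keep the "name=<container>, health_status=<state>" label
# order shown above; the log text is saved as "messages.log" (hypothetical).
import re
from collections import Counter, defaultdict

EVENT = re.compile(r"container health_status [0-9a-f]{64} "
                   r"\(image=[^,]+, name=([^,]+), health_status=(\w+)")

def tally(path="messages.log"):
    counts = defaultdict(Counter)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for name, status in EVENT.findall(fh.read()):
            counts[name][status] += 1
    return counts

if __name__ == "__main__":
    for name, c in sorted(tally().items()):
        print(f"{name:<24} healthy={c['healthy']:<3} unhealthy={c['unhealthy']}")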
Feb 1 03:44:19 localhost podman[97263]: 2026-02-01 08:44:19.880646085 +0000 UTC m=+0.094473632 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:44:19 localhost podman[97263]: 2026-02-01 08:44:19.916801955 +0000 UTC m=+0.130629462 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 1 03:44:19 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:44:21 localhost systemd[1]: tmp-crun.S8yrxP.mount: Deactivated successfully. 
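Annotation: each podman event also embeds the container's TripleO config_data as a Python-style dict literal (image, net, memory, healthcheck test, volumes, and so on), as in the collectd event just above. Because it is a plain literal, ast.literal_eval can rebuild the dict once the balanced braces are sliced out of the label string. A sketch under that assumption, where "line" stands for any single podman event from this log:

# Sketch: recover the config_data dict embedded in a podman event label string.
# Assumption: "line" holds one complete podman event from this journal and the
# event contains a "config_data={...}" label written as a Python literal.
import ast

def extract_config_data(line: str) -> dict:
    start = line.index("config_data=") + len("config_data=")  # ValueError if absent
    depth, end = 0, start
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    return ast.literal_eval(line[start:end])

# With the collectd event above:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]  -> '/openstack/healthcheck'
#   cfg["memory"]               -> '512m'
#   cfg["volumes"][0]           -> '/etc/hosts:/etc/hosts:ro'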
Feb 1 03:44:21 localhost podman[97285]: 2026-02-01 08:44:21.903263482 +0000 UTC m=+0.119309129 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_compute, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, tcib_managed=true) Feb 1 03:44:21 localhost podman[97286]: 2026-02-01 08:44:21.866426551 +0000 UTC m=+0.080897038 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, container_name=iscsid, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:44:21 localhost podman[97298]: 2026-02-01 08:44:21.92656893 +0000 UTC m=+0.132466509 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:44:21 localhost podman[97285]: 2026-02-01 08:44:21.96275819 +0000 UTC m=+0.178803847 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:44:21 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:44:21 localhost podman[97298]: 2026-02-01 08:44:21.977433948 +0000 UTC m=+0.183331487 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:44:21 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:44:22 localhost podman[97286]: 2026-02-01 08:44:21.999940543 +0000 UTC m=+0.214411020 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, container_name=iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 1 03:44:22 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
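Annotation: the same containers are probed again here about thirty seconds after the 03:43:51 round (logrotate_crond and ceilometer_agent_ipmi at 03:43:51 and then 03:44:21-22), which is consistent with a 30-second healthcheck interval. A hedged sketch that measures the observed cadence per container from the podman event timestamps, again assuming the excerpt is saved as "messages.log" and that each podman event occupies a single journal line as journald normally emits it:

# Sketch: estimate the interval between successive health probes per container.
# Assumptions: excerpt saved as "messages.log" (hypothetical path); one podman
# event per journal line; timestamps compared at whole-second precision.
import re
from collections import defaultdict
from datetime import datetime

EVENT = re.compile(
    r"podman\[\d+\]: (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \+0000 UTC m=\+\S+ "
    r"container health_status [0-9a-f]{64} \(image=[^,]+, name=([^,]+)")

def intervals(path="messages.log"):
    seen = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for ts, name in EVENT.findall(fh.read()):
            seen[name].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    return {name: [int((b - a).total_seconds()) for a, b in zip(t, t[1:])]
            for name, t in seen.items() if len(t) > 1}

if __name__ == "__main__":
    for name, deltas in sorted(intervals().items()):
        print(f"{name:<24} {deltas} seconds between probes")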
Feb 1 03:44:22 localhost podman[97284]: 2026-02-01 08:44:21.970793471 +0000 UTC m=+0.188217380 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron) Feb 1 03:44:22 localhost podman[97284]: 2026-02-01 08:44:22.051343318 +0000 UTC m=+0.268767267 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:44:22 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:44:22 localhost podman[97287]: 2026-02-01 08:44:22.136674703 +0000 UTC m=+0.342300514 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:44:22 localhost podman[97287]: 2026-02-01 08:44:22.170660455 +0000 UTC m=+0.376286296 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 1 03:44:22 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:44:22 localhost systemd[1]: tmp-crun.oMel2h.mount: Deactivated successfully. Feb 1 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:44:26 localhost systemd[1]: tmp-crun.ocmfza.mount: Deactivated successfully. Feb 1 03:44:26 localhost podman[97399]: 2026-02-01 08:44:26.865574223 +0000 UTC m=+0.081625260 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13) Feb 1 03:44:27 localhost podman[97399]: 2026-02-01 08:44:27.253497893 +0000 UTC m=+0.469549000 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:44:27 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:44:28 localhost podman[97421]: 2026-02-01 08:44:28.850183813 +0000 UTC m=+0.064457815 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=) Feb 1 03:44:28 localhost podman[97421]: 2026-02-01 08:44:28.86607812 +0000 UTC m=+0.080352132 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13) Feb 1 03:44:28 localhost podman[97421]: unhealthy Feb 1 03:44:28 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:44:28 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:44:28 localhost systemd[1]: tmp-crun.ZkF78R.mount: Deactivated successfully. 
Feb 1 03:44:28 localhost podman[97422]: 2026-02-01 08:44:28.920553851 +0000 UTC m=+0.132854041 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:44:28 localhost podman[97422]: 2026-02-01 08:44:28.955373328 +0000 UTC m=+0.167673478 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, version=17.1.13, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc.) Feb 1 03:44:28 localhost podman[97422]: unhealthy Feb 1 03:44:28 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:44:28 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:44:41 localhost podman[97459]: 2026-02-01 08:44:41.86879646 +0000 UTC m=+0.081393363 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:44:42 localhost podman[97459]: 2026-02-01 08:44:42.101954254 +0000 UTC m=+0.314551197 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team) Feb 1 03:44:42 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:44:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
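The two podman healthcheck events above that report health_status=unhealthy (ovn_metadata_agent and ovn_controller) are each followed by their transient <container-id>.service unit failing with result 'exit-code'. A minimal sketch, not part of the log, for re-checking those containers from the host: it assumes podman and systemctl are available and reuses the container names and unit IDs copied from the entries above.

#!/usr/bin/env python3
# Sketch only: re-run the healthchecks for the containers flagged unhealthy above
# and show the state of their transient healthcheck units. Names and IDs are taken
# from the journal entries; adjust for another host.
import subprocess

FLAGGED = {
    "ovn_metadata_agent": "e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06",
    "ovn_controller": "e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257",
}

for name, cid in FLAGGED.items():
    # 'podman healthcheck run' exits 0 when the configured check passes and
    # non-zero when it fails (the journal then logs "unhealthy").
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: healthcheck exit code {rc}")

    # systemd tracks each scheduled check as a transient <container-id>.service;
    # its Result switches to 'exit-code' after a failed run, as seen above.
    show = subprocess.run(
        ["systemctl", "show", "-p", "ActiveState,SubState,Result", f"{cid}.service"],
        capture_output=True, text=True,
    )
    print(show.stdout.strip())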
Feb 1 03:44:50 localhost podman[97488]: 2026-02-01 08:44:50.882756545 +0000 UTC m=+0.098816979 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-collectd-container) Feb 1 03:44:50 localhost podman[97488]: 2026-02-01 08:44:50.911250225 +0000 UTC m=+0.127310629 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:44:50 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:44:52 localhost podman[97519]: 2026-02-01 08:44:52.852366755 +0000 UTC m=+0.064596049 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:44:52 localhost systemd[1]: tmp-crun.zlUhAA.mount: Deactivated successfully. 
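Each health_status and exec_died event above embeds the container's TripleO config_data as a Python-style dict literal, so the healthcheck command, network mode, and bind mounts can be read straight out of the journal text. A minimal sketch using the standard-library ast module; the sample blob is an abridged copy of the ceilometer_agent_compute entry above, and only the fields shown are included.

#!/usr/bin/env python3
# Sketch only: parse a config_data blob (as logged above) with ast.literal_eval
# and print the fields that matter for the healthcheck.
import ast

config_data = (
    "{'depends_on': ['tripleo_nova_libvirt.target'], "
    "'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', "
    "'net': 'host', 'privileged': False, 'restart': 'always', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/run/libvirt:/run/libvirt:shared,z']}"
)

cfg = ast.literal_eval(config_data)   # the blob is a valid Python literal
print("image:      ", cfg["image"])
print("healthcheck:", cfg["healthcheck"]["test"])
print("network:    ", cfg["net"])
for mount in cfg["volumes"]:
    print("volume:     ", mount)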
Feb 1 03:44:52 localhost podman[97510]: 2026-02-01 08:44:52.904220084 +0000 UTC m=+0.118597806 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:44:52 localhost podman[97508]: 2026-02-01 08:44:52.956230969 +0000 UTC m=+0.174575484 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 1 03:44:52 localhost podman[97508]: 2026-02-01 08:44:52.964076785 +0000 UTC m=+0.182421340 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, release=1766032510, container_name=logrotate_crond, description=Red Hat 
OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 1 03:44:52 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:44:52 localhost podman[97519]: 2026-02-01 08:44:52.98027745 +0000 UTC m=+0.192506754 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:44:52 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:44:53 localhost podman[97525]: 2026-02-01 08:44:53.06187416 +0000 UTC m=+0.269876543 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, io.openshift.expose-services=) Feb 1 03:44:53 localhost podman[97510]: 2026-02-01 08:44:53.087058037 +0000 UTC m=+0.301435779 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, 
tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container) Feb 1 03:44:53 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:44:53 localhost podman[97509]: 2026-02-01 08:44:53.102009463 +0000 UTC m=+0.320506734 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:44:53 localhost podman[97525]: 2026-02-01 08:44:53.111099227 +0000 UTC m=+0.319101560 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true) Feb 1 03:44:53 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:44:53 localhost podman[97509]: 2026-02-01 08:44:53.128643085 +0000 UTC m=+0.347140376 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:44:53 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:44:54 localhost systemd[1]: tmp-crun.y3oUci.mount: Deactivated successfully. 
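The recurring pattern in this window is: systemd starts /usr/bin/podman healthcheck run <id>, podman logs a health_status event followed by exec_died when the check process exits, and the transient <id>.service is deactivated, or fails when the check returns unhealthy. A minimal sketch that tallies those outcomes from a plain-text journal export; the messages.txt file name is an assumption, and the regex relies on the image/name/health_status field order seen in the entries above.

#!/usr/bin/env python3
# Sketch only: count health_status outcomes per container from an exported journal,
# e.g. 'journalctl --no-pager > messages.txt'.
import re
from collections import Counter

EVENT = re.compile(
    r"container health_status \S+ \(image=[^,]+, name=([^,]+), health_status=(\w+)"
)

counts = Counter()
with open("messages.txt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for name, status in EVENT.findall(line):
            counts[(name, status)] += 1

for (name, status), n in sorted(counts.items()):
    print(f"{name:30} {status:10} {n}")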
Feb 1 03:44:54 localhost podman[97727]: 2026-02-01 08:44:54.302233268 +0000 UTC m=+0.078728731 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 03:44:54 localhost podman[97727]: 2026-02-01 08:44:54.399084884 +0000 UTC m=+0.175580387 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, release=1764794109, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, version=7) Feb 1 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:44:57 localhost podman[97872]: 2026-02-01 08:44:57.875224438 +0000 UTC m=+0.082384935 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com) Feb 1 03:44:58 localhost podman[97872]: 2026-02-01 08:44:58.252275696 +0000 UTC m=+0.459436183 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:44:58 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:44:59 localhost podman[97897]: 2026-02-01 08:44:59.878841001 +0000 UTC m=+0.087840105 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 1 03:44:59 localhost podman[97897]: 2026-02-01 08:44:59.919547832 +0000 UTC m=+0.128546906 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64) Feb 1 03:44:59 localhost podman[97897]: unhealthy Feb 1 03:44:59 localhost podman[97896]: 2026-02-01 08:44:59.929963137 +0000 UTC m=+0.140981274 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, 
managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team) Feb 1 03:44:59 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:44:59 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:44:59 localhost podman[97896]: 2026-02-01 08:44:59.946787053 +0000 UTC m=+0.157805190 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, 
distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 1 03:44:59 localhost podman[97896]: unhealthy Feb 1 03:44:59 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:44:59 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:45:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:45:10 localhost recover_tripleo_nova_virtqemud[97937]: 62016 Feb 1 03:45:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:45:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:45:12 localhost systemd[1]: tmp-crun.cFEbZb.mount: Deactivated successfully. Feb 1 03:45:12 localhost podman[97938]: 2026-02-01 08:45:12.860008461 +0000 UTC m=+0.077362348 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, distribution-scope=public) Feb 1 03:45:13 localhost podman[97938]: 2026-02-01 08:45:13.048179789 +0000 UTC m=+0.265533606 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:45:13 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:45:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:45:21 localhost podman[97967]: 2026-02-01 08:45:21.865840563 +0000 UTC m=+0.082142837 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, tcib_managed=true, distribution-scope=public, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 1 03:45:21 localhost podman[97967]: 2026-02-01 08:45:21.877648572 +0000 UTC m=+0.093950836 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z) Feb 1 03:45:21 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:45:23 localhost podman[97992]: 2026-02-01 08:45:23.878012963 +0000 UTC m=+0.085932576 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:45:23 localhost podman[97992]: 2026-02-01 08:45:23.908712682 +0000 UTC m=+0.116632295 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 1 03:45:23 localhost podman[97989]: 2026-02-01 08:45:23.924810985 +0000 UTC m=+0.141799911 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Feb 1 03:45:23 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:45:23 localhost podman[97989]: 2026-02-01 08:45:23.934531078 +0000 UTC m=+0.151519984 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 
17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 1 03:45:23 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:45:23 localhost podman[97990]: 2026-02-01 08:45:23.975134566 +0000 UTC m=+0.189104218 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Feb 1 03:45:24 localhost podman[97990]: 2026-02-01 08:45:24.00052397 +0000 UTC m=+0.214493632 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:45:24 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:45:24 localhost podman[97999]: 2026-02-01 08:45:24.076804723 +0000 UTC m=+0.281904928 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:45:24 localhost podman[97999]: 2026-02-01 08:45:24.100459452 +0000 UTC m=+0.305559587 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:45:24 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:45:24 localhost podman[97991]: 2026-02-01 08:45:24.186387486 +0000 UTC m=+0.396033382 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, config_id=tripleo_step3, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:45:24 localhost podman[97991]: 2026-02-01 08:45:24.199510327 +0000 UTC m=+0.409156223 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, release=1766032510, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:45:24 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:45:28 localhost podman[98107]: 2026-02-01 08:45:28.86160956 +0000 UTC m=+0.078237205 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:45:29 localhost podman[98107]: 2026-02-01 08:45:29.239627409 +0000 UTC m=+0.456255104 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:45:29 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:45:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:45:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:45:30 localhost systemd[1]: tmp-crun.3faqrz.mount: Deactivated successfully. 
Feb 1 03:45:30 localhost podman[98131]: 2026-02-01 08:45:30.883652919 +0000 UTC m=+0.098218289 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, config_id=tripleo_step4) Feb 1 03:45:30 localhost systemd[1]: tmp-crun.M6Lcy7.mount: Deactivated successfully. 
Feb 1 03:45:30 localhost podman[98132]: 2026-02-01 08:45:30.929205512 +0000 UTC m=+0.141484562 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510) Feb 1 03:45:30 localhost podman[98131]: 2026-02-01 08:45:30.950717744 +0000 UTC m=+0.165283084 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:45:30 localhost podman[98131]: unhealthy Feb 1 03:45:30 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:45:30 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:45:30 localhost podman[98132]: 2026-02-01 08:45:30.982678602 +0000 UTC m=+0.194957582 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:45:30 localhost podman[98132]: unhealthy Feb 1 03:45:31 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:45:31 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:45:43 localhost podman[98171]: 2026-02-01 08:45:43.859816631 +0000 UTC m=+0.075896882 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:45:44 localhost podman[98171]: 2026-02-01 08:45:44.054856024 +0000 UTC m=+0.270936245 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:45:44 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:45:53 localhost podman[98200]: 2026-02-01 08:45:53.547876583 +0000 UTC m=+0.081269670 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, url=https://www.redhat.com, container_name=collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:45:53 localhost podman[98200]: 2026-02-01 08:45:53.562644344 +0000 UTC m=+0.096037431 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.13, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3) Feb 1 03:45:53 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:45:54 localhost podman[98222]: 2026-02-01 08:45:54.877639385 +0000 UTC m=+0.090311873 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:45:54 localhost podman[98221]: 2026-02-01 08:45:54.928736991 +0000 UTC m=+0.144362321 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:45:54 localhost podman[98226]: 2026-02-01 08:45:54.995392913 +0000 UTC m=+0.199963568 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:45:55 localhost podman[98226]: 2026-02-01 08:45:55.022560651 +0000 UTC m=+0.227131346 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, distribution-scope=public, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com) Feb 1 03:45:55 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:45:55 localhost podman[98235]: 2026-02-01 08:45:55.040794622 +0000 UTC m=+0.243155178 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 1 03:45:55 localhost podman[98223]: 2026-02-01 08:45:55.090231655 +0000 UTC m=+0.299266229 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:45:55 localhost podman[98223]: 2026-02-01 08:45:55.103699246 +0000 UTC m=+0.312733870 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:45:55 localhost podman[98222]: 2026-02-01 08:45:55.110032225 +0000 UTC m=+0.322704653 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z) Feb 1 03:45:55 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:45:55 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:45:55 localhost podman[98235]: 2026-02-01 08:45:55.118557561 +0000 UTC m=+0.320918037 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Feb 1 03:45:55 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:45:55 localhost podman[98221]: 2026-02-01 08:45:55.163184645 +0000 UTC m=+0.378809955 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:45:55 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:45:55 localhost systemd[1]: tmp-crun.yA5CZs.mount: Deactivated successfully. Feb 1 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:45:59 localhost podman[98420]: 2026-02-01 08:45:59.875010561 +0000 UTC m=+0.090052704 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.5) Feb 1 03:46:00 localhost podman[98420]: 2026-02-01 08:46:00.257972595 +0000 UTC m=+0.473014708 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:46:00 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:46:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:46:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:46:01 localhost systemd[1]: tmp-crun.4wW9QA.mount: Deactivated successfully. 
Feb 1 03:46:01 localhost podman[98445]: 2026-02-01 08:46:01.889065959 +0000 UTC m=+0.100495269 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:46:01 localhost podman[98445]: 2026-02-01 08:46:01.932363923 +0000 UTC m=+0.143793183 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container) Feb 1 03:46:01 localhost podman[98445]: unhealthy Feb 1 03:46:01 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:46:01 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:46:01 localhost podman[98444]: 2026-02-01 08:46:01.936616755 +0000 UTC m=+0.150172852 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Feb 1 03:46:02 localhost podman[98444]: 2026-02-01 08:46:02.019741642 +0000 UTC m=+0.233297749 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:46:02 localhost podman[98444]: unhealthy Feb 1 03:46:02 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:46:02 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:46:14 localhost podman[98482]: 2026-02-01 08:46:14.867461832 +0000 UTC m=+0.083126916 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, version=17.1.13, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:46:15 localhost podman[98482]: 2026-02-01 08:46:15.058883643 
+0000 UTC m=+0.274548717 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 1 03:46:15 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:46:23 localhost systemd[1]: tmp-crun.DNtOEY.mount: Deactivated successfully. 
Feb 1 03:46:23 localhost podman[98511]: 2026-02-01 08:46:23.867542853 +0000 UTC m=+0.084068567 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:46:23 localhost podman[98511]: 2026-02-01 08:46:23.876755951 +0000 UTC m=+0.093281595 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, 
vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, container_name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:46:23 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:46:25 localhost podman[98533]: 2026-02-01 08:46:25.880022993 +0000 UTC m=+0.091868209 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:46:25 localhost podman[98536]: 2026-02-01 08:46:25.860854445 +0000 UTC m=+0.068058437 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true) Feb 1 03:46:25 localhost podman[98533]: 2026-02-01 08:46:25.918553487 +0000 UTC m=+0.130398673 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-cron-container, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1766032510) Feb 1 03:46:25 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:46:25 localhost podman[98542]: 2026-02-01 08:46:25.912383094 +0000 UTC m=+0.114828727 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:46:26 localhost podman[98534]: 2026-02-01 08:46:25.966342319 +0000 UTC m=+0.175624416 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:46:26 localhost podman[98535]: 2026-02-01 08:46:26.024688902 +0000 UTC m=+0.230548962 container health_status 
28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.13) Feb 1 03:46:26 localhost podman[98534]: 2026-02-01 08:46:26.044825331 +0000 UTC m=+0.254107408 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, config_id=tripleo_step5, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.) 
Feb 1 03:46:26 localhost podman[98542]: 2026-02-01 08:46:26.04702193 +0000 UTC m=+0.249467583 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:46:26 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:46:26 localhost podman[98535]: 2026-02-01 08:46:26.061587215 +0000 UTC m=+0.267447205 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:46:26 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:46:26 localhost podman[98536]: 2026-02-01 08:46:26.095974419 +0000 UTC m=+0.303178431 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public) Feb 1 03:46:26 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:46:26 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:46:30 localhost podman[98645]: 2026-02-01 08:46:30.848985433 +0000 UTC m=+0.068727567 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:46:31 localhost podman[98645]: 2026-02-01 08:46:31.242798406 +0000 UTC m=+0.462540570 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team) Feb 1 03:46:31 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:46:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:46:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:46:32 localhost podman[98668]: 2026-02-01 08:46:32.860856344 +0000 UTC m=+0.075901902 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:46:32 localhost podman[98668]: 2026-02-01 08:46:32.879861078 +0000 UTC m=+0.094906636 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:46:32 localhost podman[98668]: unhealthy Feb 1 03:46:32 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:46:32 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:46:32 localhost systemd[1]: tmp-crun.WLwa8I.mount: Deactivated successfully. 
Feb 1 03:46:32 localhost podman[98669]: 2026-02-01 08:46:32.974098042 +0000 UTC m=+0.185645991 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:46:33 localhost podman[98669]: 2026-02-01 08:46:33.010373365 +0000 UTC m=+0.221921314 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:46:33 localhost podman[98669]: unhealthy Feb 1 03:46:33 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:46:33 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:46:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:46:40 localhost recover_tripleo_nova_virtqemud[98709]: 62016 Feb 1 03:46:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:46:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:46:45 localhost systemd[1]: tmp-crun.nom87f.mount: Deactivated successfully. 
Feb 1 03:46:45 localhost podman[98710]: 2026-02-01 08:46:45.874511178 +0000 UTC m=+0.092955665 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 1 03:46:46 localhost podman[98710]: 2026-02-01 08:46:46.059502557 +0000 UTC m=+0.277947044 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step1) Feb 1 03:46:46 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:46:55 localhost podman[98739]: 2026-02-01 08:46:55.29847636 +0000 UTC m=+0.514137873 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, architecture=x86_64, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 1 03:46:55 localhost podman[98739]: 2026-02-01 08:46:55.310718022 +0000 UTC m=+0.526379515 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Feb 1 03:46:55 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:46:56 localhost podman[98760]: 2026-02-01 08:46:56.898083561 +0000 UTC m=+0.068856642 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:46:56 localhost podman[98771]: 2026-02-01 08:46:56.955541356 +0000 UTC m=+0.124648895 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., distribution-scope=public) Feb 1 03:46:56 localhost podman[98771]: 2026-02-01 08:46:56.966481258 +0000 UTC m=+0.135588807 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true) Feb 1 03:46:56 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:46:56 localhost podman[98759]: 2026-02-01 08:46:56.878477079 +0000 UTC m=+0.091671615 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, 
batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 1 03:46:56 localhost podman[98773]: 2026-02-01 08:46:56.936219512 +0000 UTC m=+0.097585900 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, release=1766032510) Feb 1 03:46:57 localhost podman[98759]: 2026-02-01 08:46:57.011550635 +0000 UTC m=+0.224745171 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible) Feb 1 03:46:57 localhost podman[98773]: 2026-02-01 08:46:57.014507838 +0000 UTC m=+0.175874246 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:46:57 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:46:57 localhost podman[98760]: 2026-02-01 08:46:57.031330153 +0000 UTC m=+0.202103264 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, tcib_managed=true, version=17.1.13, vcs-type=git, container_name=nova_compute, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:46:57 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:46:57 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:46:57 localhost podman[98772]: 2026-02-01 08:46:57.11890517 +0000 UTC m=+0.283667383 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc.) Feb 1 03:46:57 localhost podman[98772]: 2026-02-01 08:46:57.177363935 +0000 UTC m=+0.342126148 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:46:57 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:47:01 localhost podman[98958]: 2026-02-01 08:47:01.899283118 +0000 UTC m=+0.112907008 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:47:02 localhost podman[98958]: 2026-02-01 08:47:02.269270466 +0000 UTC m=+0.482894366 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, 
release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z) Feb 1 03:47:02 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:47:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:47:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:47:03 localhost systemd[1]: tmp-crun.SmyWWB.mount: Deactivated successfully. 
Feb 1 03:47:03 localhost podman[98981]: 2026-02-01 08:47:03.862777188 +0000 UTC m=+0.077284026 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, container_name=ovn_controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:47:03 localhost podman[98981]: 2026-02-01 08:47:03.899891747 +0000 UTC m=+0.114398595 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team) Feb 1 03:47:03 localhost podman[98981]: unhealthy Feb 1 03:47:03 localhost systemd[1]: tmp-crun.63DDDJ.mount: Deactivated successfully. Feb 1 03:47:03 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:47:03 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:47:03 localhost podman[98980]: 2026-02-01 08:47:03.916395312 +0000 UTC m=+0.130649872 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, vcs-type=git) Feb 1 03:47:03 localhost podman[98980]: 2026-02-01 08:47:03.953248714 +0000 UTC m=+0.167503234 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 1 03:47:03 localhost podman[98980]: unhealthy Feb 1 03:47:03 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:47:03 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:47:16 localhost podman[99019]: 2026-02-01 08:47:16.867840003 +0000 UTC m=+0.083956504 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, tcib_managed=true, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc.) Feb 1 03:47:17 localhost podman[99019]: 2026-02-01 08:47:17.069665588 +0000 UTC m=+0.285782129 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:47:17 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:47:25 localhost podman[99049]: 2026-02-01 08:47:25.856007451 +0000 UTC m=+0.072814016 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:47:25 localhost podman[99049]: 2026-02-01 08:47:25.867629274 +0000 UTC m=+0.084435869 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:47:25 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:47:27 localhost podman[99070]: 2026-02-01 08:47:27.866472797 +0000 UTC m=+0.079958108 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:47:27 localhost systemd[1]: tmp-crun.V52r65.mount: Deactivated successfully. 
Feb 1 03:47:27 localhost podman[99069]: 2026-02-01 08:47:27.935362099 +0000 UTC m=+0.151458952 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 1 03:47:27 localhost podman[99070]: 2026-02-01 08:47:27.950630656 +0000 UTC m=+0.164115917 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:47:27 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:47:27 localhost podman[99069]: 2026-02-01 08:47:27.967771201 +0000 UTC m=+0.183868064 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1766032510, container_name=logrotate_crond, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:47:27 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:47:28 localhost podman[99072]: 2026-02-01 08:47:28.04166697 +0000 UTC m=+0.248473943 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:47:28 localhost podman[99072]: 2026-02-01 08:47:28.072691669 +0000 UTC m=+0.279498712 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, version=17.1.13) Feb 1 03:47:28 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:47:28 localhost podman[99071]: 2026-02-01 08:47:28.09065714 +0000 UTC m=+0.298687421 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:47:28 localhost podman[99071]: 2026-02-01 08:47:28.101620323 +0000 UTC m=+0.309650594 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, version=17.1.13, vcs-type=git, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:47:28 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:47:28 localhost podman[99078]: 2026-02-01 08:47:27.906991703 +0000 UTC m=+0.112028910 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 1 03:47:28 localhost podman[99078]: 2026-02-01 08:47:28.189669284 +0000 UTC m=+0.394706511 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 1 03:47:28 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:47:32 localhost podman[99190]: 2026-02-01 08:47:32.861825332 +0000 UTC m=+0.077898715 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:47:33 localhost podman[99190]: 2026-02-01 08:47:33.223238702 +0000 UTC m=+0.439312085 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, 
managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:47:33 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:47:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:47:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:47:34 localhost podman[99214]: 2026-02-01 08:47:34.857506176 +0000 UTC m=+0.066756047 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., container_name=ovn_controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, release=1766032510) Feb 1 03:47:34 localhost podman[99214]: 2026-02-01 08:47:34.872517375 +0000 UTC m=+0.081767236 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 1 03:47:34 localhost podman[99214]: unhealthy Feb 1 03:47:34 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:47:34 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:47:34 localhost podman[99213]: 2026-02-01 08:47:34.956772197 +0000 UTC m=+0.168069152 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1) Feb 1 03:47:34 localhost podman[99213]: 2026-02-01 08:47:34.974606514 +0000 UTC m=+0.185903459 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com) Feb 1 03:47:34 localhost podman[99213]: unhealthy Feb 1 03:47:34 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:47:34 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:47:47 localhost podman[99253]: 2026-02-01 08:47:47.864459321 +0000 UTC m=+0.080386543 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:10:14Z, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 1 03:47:48 localhost 
podman[99253]: 2026-02-01 08:47:48.046678633 +0000 UTC m=+0.262605885 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Feb 1 03:47:48 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:47:48 localhost systemd[1]: session-28.scope: Deactivated successfully. Feb 1 03:47:48 localhost systemd[1]: session-28.scope: Consumed 7min 3.530s CPU time. Feb 1 03:47:48 localhost systemd-logind[761]: Session 28 logged out. Waiting for processes to exit. Feb 1 03:47:48 localhost systemd-logind[761]: Removed session 28. Feb 1 03:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:47:56 localhost podman[99283]: 2026-02-01 08:47:56.87313753 +0000 UTC m=+0.088252439 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
Feb 1 03:47:56 localhost podman[99283]: 2026-02-01 08:47:56.912674045 +0000 UTC m=+0.127788994 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13) Feb 1 03:47:56 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:47:58 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 1 03:47:58 localhost systemd[35763]: Activating special unit Exit the Session... Feb 1 03:47:58 localhost systemd[35763]: Removed slice User Background Tasks Slice. Feb 1 03:47:58 localhost systemd[35763]: Stopped target Main User Target. Feb 1 03:47:58 localhost systemd[35763]: Stopped target Basic System. Feb 1 03:47:58 localhost systemd[35763]: Stopped target Paths. Feb 1 03:47:58 localhost systemd[35763]: Stopped target Sockets. Feb 1 03:47:58 localhost systemd[35763]: Stopped target Timers. Feb 1 03:47:58 localhost systemd[35763]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 1 03:47:58 localhost systemd[35763]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 03:47:58 localhost systemd[35763]: Closed D-Bus User Message Bus Socket. Feb 1 03:47:58 localhost systemd[35763]: Stopped Create User's Volatile Files and Directories. Feb 1 03:47:58 localhost systemd[35763]: Removed slice User Application Slice. Feb 1 03:47:58 localhost systemd[35763]: Reached target Shutdown. Feb 1 03:47:58 localhost systemd[35763]: Finished Exit the Session. Feb 1 03:47:58 localhost systemd[35763]: Reached target Exit the Session. Feb 1 03:47:58 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 1 03:47:58 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 1 03:47:58 localhost systemd[1]: user@1003.service: Consumed 4.621s CPU time. Feb 1 03:47:58 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 1 03:47:58 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 1 03:47:58 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 1 03:47:58 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 1 03:47:58 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 1 03:47:58 localhost systemd[1]: user-1003.slice: Consumed 7min 8.178s CPU time. Feb 1 03:47:58 localhost systemd[1]: tmp-crun.Uc5l0y.mount: Deactivated successfully. 
Feb 1 03:47:58 localhost podman[99305]: 2026-02-01 08:47:58.653525488 +0000 UTC m=+0.104202666 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, tcib_managed=true, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:47:58 localhost podman[99305]: 2026-02-01 08:47:58.695730226 +0000 UTC m=+0.146407434 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, version=17.1.13, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:47:58 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:47:58 localhost podman[99303]: 2026-02-01 08:47:58.744124369 +0000 UTC m=+0.200802324 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-type=git, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Feb 1 03:47:58 localhost podman[99303]: 2026-02-01 08:47:58.751723326 +0000 UTC m=+0.208401311 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, release=1766032510, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 1 03:47:58 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:47:58 localhost podman[99306]: 2026-02-01 08:47:58.799494718 +0000 UTC m=+0.248012708 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2026-01-12T23:07:47Z, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:47:58 localhost podman[99304]: 2026-02-01 08:47:58.701973612 +0000 UTC m=+0.154893090 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510, container_name=nova_compute, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:47:58 localhost podman[99312]: 2026-02-01 08:47:58.670391825 +0000 UTC m=+0.115964953 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:47:58 localhost podman[99306]: 2026-02-01 08:47:58.828332049 +0000 UTC m=+0.276850050 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Feb 1 03:47:58 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:47:58 localhost podman[99312]: 2026-02-01 08:47:58.859677128 +0000 UTC m=+0.305250306 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:47:58 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:47:58 localhost podman[99304]: 2026-02-01 08:47:58.913390556 +0000 UTC m=+0.366310014 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, 
io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:47:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:47:59 localhost systemd[1]: tmp-crun.c9arQz.mount: Deactivated successfully. Feb 1 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:48:03 localhost podman[99490]: 2026-02-01 08:48:03.86327652 +0000 UTC m=+0.080284679 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, version=17.1.13, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true) Feb 1 03:48:04 localhost podman[99490]: 2026-02-01 08:48:04.237764129 +0000 UTC m=+0.454772308 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, 
name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, container_name=nova_migration_target, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 1 03:48:04 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:48:05 localhost systemd[1]: tmp-crun.wlV0N1.mount: Deactivated successfully. 
Feb 1 03:48:05 localhost podman[99514]: 2026-02-01 08:48:05.876323369 +0000 UTC m=+0.083438788 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5) Feb 1 03:48:05 localhost podman[99514]: 2026-02-01 08:48:05.895630271 +0000 UTC m=+0.102745650 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Feb 1 03:48:05 localhost podman[99514]: unhealthy Feb 1 03:48:05 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:48:05 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:48:05 localhost podman[99513]: 2026-02-01 08:48:05.973334959 +0000 UTC m=+0.184471094 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:48:06 localhost podman[99513]: 2026-02-01 08:48:06.015184266 +0000 UTC m=+0.226320411 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 1 03:48:06 localhost podman[99513]: unhealthy Feb 1 03:48:06 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:48:06 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:48:18 localhost podman[99552]: 2026-02-01 08:48:18.868345247 +0000 UTC m=+0.083898482 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, 
batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z) Feb 1 03:48:19 localhost podman[99552]: 2026-02-01 08:48:19.144064661 +0000 UTC m=+0.359617906 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:48:19 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:48:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:48:20 localhost recover_tripleo_nova_virtqemud[99582]: 62016 Feb 1 03:48:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:48:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:48:27 localhost podman[99583]: 2026-02-01 08:48:27.859129507 +0000 UTC m=+0.077585675 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 1 03:48:27 localhost podman[99583]: 2026-02-01 08:48:27.871647647 +0000 UTC m=+0.090103795 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:48:27 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:48:28 localhost podman[99603]: 2026-02-01 08:48:28.867237339 +0000 UTC m=+0.081185156 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z) Feb 1 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:48:28 localhost podman[99603]: 2026-02-01 08:48:28.903972718 +0000 UTC m=+0.117920495 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:48:28 localhost systemd[1]: tmp-crun.TsRp13.mount: Deactivated successfully. Feb 1 03:48:28 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:48:28 localhost podman[99604]: 2026-02-01 08:48:28.929045631 +0000 UTC m=+0.139168688 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron) Feb 1 03:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:48:28 localhost podman[99604]: 2026-02-01 08:48:28.967596265 +0000 UTC m=+0.177719322 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com) Feb 1 03:48:28 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:48:29 localhost podman[99634]: 2026-02-01 08:48:28.991142901 +0000 UTC m=+0.096347632 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, architecture=x86_64) Feb 1 03:48:29 localhost podman[99634]: 2026-02-01 08:48:29.077705545 +0000 UTC m=+0.182910276 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:48:29 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:48:29 localhost podman[99663]: 2026-02-01 08:48:29.092637211 +0000 UTC m=+0.138373823 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:48:29 localhost podman[99633]: 2026-02-01 08:48:29.056865814 +0000 UTC m=+0.167878096 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:48:29 localhost podman[99663]: 2026-02-01 08:48:29.12110309 +0000 UTC m=+0.166839752 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:48:29 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:48:29 localhost podman[99633]: 2026-02-01 08:48:29.142742807 +0000 UTC m=+0.253755119 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true) Feb 1 03:48:29 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:48:29 localhost systemd[1]: tmp-crun.KmrEQi.mount: Deactivated successfully. Feb 1 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:48:34 localhost podman[99722]: 2026-02-01 08:48:34.858976259 +0000 UTC m=+0.075474786 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:48:35 localhost podman[99722]: 2026-02-01 08:48:35.215710369 +0000 UTC m=+0.432208976 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 1 03:48:35 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:48:36 localhost podman[99745]: 2026-02-01 08:48:36.838636421 +0000 UTC m=+0.062498960 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1766032510, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:48:36 localhost podman[99746]: 2026-02-01 08:48:36.84852524 +0000 UTC m=+0.068516659 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=) Feb 1 03:48:36 localhost podman[99745]: 2026-02-01 08:48:36.858313635 +0000 UTC m=+0.082176144 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 1 03:48:36 localhost podman[99745]: unhealthy Feb 1 03:48:36 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:48:36 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:48:36 localhost podman[99746]: 2026-02-01 08:48:36.914376114 +0000 UTC m=+0.134367573 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z) Feb 1 03:48:36 localhost podman[99746]: unhealthy Feb 1 03:48:36 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:48:36 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:48:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:48:49 localhost podman[99787]: 2026-02-01 08:48:49.875348126 +0000 UTC m=+0.091438644 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 
17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5) Feb 1 03:48:50 localhost podman[99787]: 2026-02-01 08:48:50.063001831 +0000 UTC m=+0.279092339 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=) Feb 1 03:48:50 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:48:58 localhost podman[99816]: 2026-02-01 08:48:58.863459801 +0000 UTC m=+0.079984316 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Feb 1 03:48:58 localhost podman[99816]: 2026-02-01 08:48:58.898682681 +0000 UTC m=+0.115207156 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack 
Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git) Feb 1 03:48:58 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:48:59 localhost podman[99845]: 2026-02-01 08:48:59.88014009 +0000 UTC m=+0.085075625 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:48:59 localhost podman[99845]: 2026-02-01 08:48:59.931634626 +0000 UTC m=+0.136570131 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:48:59 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:48:59 localhost podman[99837]: 2026-02-01 08:48:59.934370542 +0000 UTC m=+0.148147313 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:49:00 localhost podman[99836]: 2026-02-01 08:48:59.985956502 +0000 UTC m=+0.203482860 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13) Feb 1 03:49:00 localhost podman[99838]: 2026-02-01 08:49:00.041534206 +0000 UTC m=+0.251837878 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Feb 1 03:49:00 localhost podman[99838]: 2026-02-01 08:49:00.079666225 +0000 UTC m=+0.289969897 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, container_name=iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, version=17.1.13) Feb 1 03:49:00 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:49:00 localhost podman[99839]: 2026-02-01 08:49:00.091229176 +0000 UTC m=+0.300064813 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 1 03:49:00 localhost podman[99837]: 2026-02-01 
08:49:00.115007188 +0000 UTC m=+0.328783959 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:49:00 localhost podman[99836]: 2026-02-01 08:49:00.117747643 +0000 UTC m=+0.335274031 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, vcs-type=git, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 1 03:49:00 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:49:00 localhost podman[99839]: 2026-02-01 08:49:00.17278187 +0000 UTC m=+0.381617457 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 1 03:49:00 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:49:00 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:49:05 localhost systemd[1]: tmp-crun.WrXfh1.mount: Deactivated successfully. 
Feb 1 03:49:05 localhost podman[100031]: 2026-02-01 08:49:05.871014636 +0000 UTC m=+0.088587506 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4) Feb 1 03:49:06 localhost podman[100031]: 2026-02-01 08:49:06.264103869 +0000 UTC m=+0.481676739 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team) Feb 1 03:49:06 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:49:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:49:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:49:07 localhost podman[100055]: 2026-02-01 08:49:07.872149598 +0000 UTC m=+0.081623268 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:49:07 localhost podman[100055]: 2026-02-01 08:49:07.888174878 +0000 UTC m=+0.097648588 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:49:07 localhost podman[100055]: unhealthy Feb 1 03:49:07 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:49:07 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:49:07 localhost systemd[1]: tmp-crun.iutctQ.mount: Deactivated successfully. Feb 1 03:49:07 localhost podman[100054]: 2026-02-01 08:49:07.935664289 +0000 UTC m=+0.145740198 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, container_name=ovn_metadata_agent, release=1766032510, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:49:07 localhost podman[100054]: 2026-02-01 08:49:07.951792422 +0000 UTC m=+0.161868421 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:49:07 localhost podman[100054]: unhealthy Feb 1 03:49:07 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:49:07 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:49:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:49:20 localhost podman[100093]: 2026-02-01 08:49:20.872518907 +0000 UTC m=+0.085057805 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-type=git, 
com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 1 03:49:21 localhost podman[100093]: 2026-02-01 08:49:21.089705143 +0000 UTC m=+0.302244021 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.buildah.version=1.41.5, vcs-type=git) Feb 1 03:49:21 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:49:29 localhost systemd[1]: tmp-crun.AVlp9m.mount: Deactivated successfully. 
Feb 1 03:49:29 localhost podman[100121]: 2026-02-01 08:49:29.882260708 +0000 UTC m=+0.094156249 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13) Feb 1 03:49:29 localhost podman[100121]: 2026-02-01 08:49:29.89162745 +0000 UTC m=+0.103522951 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:49:29 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:49:30 localhost systemd[1]: tmp-crun.q2aDMt.mount: Deactivated successfully. 
Feb 1 03:49:30 localhost podman[100143]: 2026-02-01 08:49:30.889275534 +0000 UTC m=+0.098781282 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:49:30 localhost podman[100143]: 2026-02-01 08:49:30.894486707 +0000 UTC m=+0.103992455 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat 
OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 1 03:49:30 localhost systemd[1]: tmp-crun.Skafwu.mount: Deactivated successfully. Feb 1 03:49:30 localhost podman[100157]: 2026-02-01 08:49:30.907157323 +0000 UTC m=+0.098651450 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, tcib_managed=true) Feb 1 03:49:30 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:49:30 localhost podman[100157]: 2026-02-01 08:49:30.935508697 +0000 UTC m=+0.127002814 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi) Feb 1 03:49:30 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:49:30 localhost podman[100145]: 2026-02-01 08:49:30.939597744 +0000 UTC m=+0.140581316 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, vcs-type=git, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Feb 1 03:49:30 localhost podman[100144]: 2026-02-01 08:49:30.995931882 +0000 UTC m=+0.202089416 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, architecture=x86_64, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:49:31 localhost podman[100151]: 2026-02-01 08:49:31.043364412 +0000 UTC m=+0.240742382 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true) Feb 1 03:49:31 localhost podman[100145]: 2026-02-01 08:49:31.068939749 +0000 UTC m=+0.269923331 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Feb 1 03:49:31 localhost podman[100144]: 2026-02-01 08:49:31.075264857 +0000 UTC m=+0.281422391 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 1 03:49:31 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:49:31 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:49:31 localhost podman[100151]: 2026-02-01 08:49:31.096886131 +0000 UTC m=+0.294264101 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:49:31 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:49:36 localhost podman[100257]: 2026-02-01 08:49:36.863519821 +0000 UTC m=+0.078050716 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510) Feb 1 03:49:37 localhost podman[100257]: 2026-02-01 08:49:37.233878086 +0000 UTC m=+0.448409021 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, 
name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:49:37 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:49:38 localhost podman[100280]: 2026-02-01 08:49:38.879247548 +0000 UTC m=+0.081037499 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4) Feb 1 03:49:38 localhost podman[100280]: 2026-02-01 08:49:38.926077079 +0000 UTC m=+0.127867020 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, config_id=tripleo_step4) Feb 1 03:49:38 localhost podman[100280]: unhealthy Feb 1 03:49:38 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:49:38 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:49:38 localhost podman[100279]: 2026-02-01 08:49:38.92897959 +0000 UTC m=+0.132367361 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, container_name=ovn_metadata_agent, distribution-scope=public, config_id=tripleo_step4, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=) Feb 1 03:49:39 localhost podman[100279]: 2026-02-01 08:49:39.013767315 +0000 UTC m=+0.217155056 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Feb 1 03:49:39 localhost podman[100279]: unhealthy Feb 1 03:49:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:49:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:49:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:49:51 localhost podman[100321]: 2026-02-01 08:49:51.856412865 +0000 UTC m=+0.074870037 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 
03:49:52 localhost podman[100321]: 2026-02-01 08:49:52.069577226 +0000 UTC m=+0.288034398 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:49:52 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:50:00 localhost systemd[1]: tmp-crun.YYVVlp.mount: Deactivated successfully. 
Feb 1 03:50:00 localhost podman[100351]: 2026-02-01 08:50:00.884454686 +0000 UTC m=+0.091701111 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-collectd-container, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:50:00 localhost podman[100351]: 2026-02-01 08:50:00.89641076 +0000 UTC m=+0.103657195 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 1 03:50:00 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:50:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:50:01 localhost recover_tripleo_nova_virtqemud[100404]: 62016 Feb 1 03:50:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:50:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:50:01 localhost systemd[1]: tmp-crun.P3jjcv.mount: Deactivated successfully. 
Feb 1 03:50:01 localhost podman[100371]: 2026-02-01 08:50:01.884543807 +0000 UTC m=+0.106532515 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public) Feb 1 03:50:01 localhost podman[100372]: 2026-02-01 08:50:01.895716836 +0000 UTC m=+0.109820187 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 1 03:50:01 localhost podman[100371]: 2026-02-01 08:50:01.920237801 +0000 UTC m=+0.142226479 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public) Feb 1 03:50:01 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:50:01 localhost podman[100373]: 2026-02-01 08:50:01.93653562 +0000 UTC m=+0.149475694 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, 
name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Feb 1 03:50:01 localhost podman[100372]: 2026-02-01 08:50:01.957680669 +0000 UTC m=+0.171784040 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, build-date=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, container_name=nova_compute) Feb 1 03:50:01 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:50:01 localhost podman[100375]: 2026-02-01 08:50:01.977314521 +0000 UTC m=+0.189934166 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, release=1766032510, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:50:02 localhost podman[100373]: 2026-02-01 08:50:01.996039796 +0000 UTC m=+0.208979840 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:50:02 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:50:02 localhost podman[100375]: 2026-02-01 08:50:02.055869693 +0000 UTC m=+0.268489278 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z) Feb 1 03:50:02 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:50:02 localhost podman[100380]: 2026-02-01 08:50:02.075862987 +0000 UTC m=+0.286593213 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:50:02 localhost podman[100380]: 2026-02-01 08:50:02.101402173 +0000 UTC m=+0.312132429 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 1 03:50:02 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:50:02 localhost systemd[1]: tmp-crun.V8kZ54.mount: Deactivated successfully. Feb 1 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
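[Editor's note] The healthcheck units systemd reports starting here ("Started /usr/bin/podman healthcheck run <id>") are named after the full 64-character container ID, which makes the journal hard to read on its own. A small sketch, assuming podman is on PATH, to map those IDs back to the container names used in the entries above:

    # Sketch: build a map from full container IDs (as used in the transient
    # healthcheck units) to container names, using only `podman ps`.
    import subprocess

    def id_to_name():
        out = subprocess.run(
            ["podman", "ps", "-a", "--no-trunc", "--format", "{{.ID}} {{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return dict(line.split(maxsplit=1) for line in out.splitlines() if line)

    if __name__ == "__main__":
        names = id_to_name()
        # Per the entries that follow, this ID belongs to nova_migration_target.
        print(names.get(
            "080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96",
            "unknown"))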
Feb 1 03:50:07 localhost podman[100566]: 2026-02-01 08:50:07.869840739 +0000 UTC m=+0.082146673 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:50:08 localhost podman[100566]: 2026-02-01 08:50:08.244742146 +0000 UTC m=+0.457048060 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc.) Feb 1 03:50:08 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:50:09 localhost podman[100590]: 2026-02-01 08:50:09.885759854 +0000 UTC m=+0.095985437 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, release=1766032510, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:50:09 localhost podman[100589]: 2026-02-01 08:50:09.927497725 +0000 UTC m=+0.140369500 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:50:09 localhost podman[100589]: 2026-02-01 08:50:09.949750469 +0000 UTC m=+0.162622274 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:50:09 localhost podman[100589]: unhealthy Feb 1 03:50:09 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:50:09 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:50:09 localhost podman[100590]: 2026-02-01 08:50:09.979404975 +0000 UTC m=+0.189630498 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:50:09 localhost podman[100590]: unhealthy Feb 1 03:50:09 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:50:09 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:50:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:50:22 localhost podman[100629]: 2026-02-01 08:50:22.873002704 +0000 UTC m=+0.090838226 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com) Feb 1 03:50:23 localhost podman[100629]: 2026-02-01 08:50:23.059805981 
+0000 UTC m=+0.277641523 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public) Feb 1 03:50:23 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
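[Editor's note] In the entries above, the ovn_metadata_agent and ovn_controller checks came back unhealthy: podman printed "unhealthy", the check exited 1, and systemd marked the transient units failed (status=1/FAILURE). A sketch for re-running those checks by hand and reading the result from the exit code, assuming podman is available and the container names match the log:

    # Sketch: re-run the configured healthcheck for the containers that failed
    # above. `podman healthcheck run` (the same command systemd launches here)
    # exits 0 when the check passes and non-zero when it does not.
    import subprocess

    def recheck(names=("ovn_controller", "ovn_metadata_agent")):
        results = {}
        for name in names:
            proc = subprocess.run(["podman", "healthcheck", "run", name],
                                  capture_output=True, text=True)
            results[name] = "healthy" if proc.returncode == 0 else "unhealthy"
        return results

    if __name__ == "__main__":
        for name, status in recheck().items():
            print(f"{name}: {status}")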
Feb 1 03:50:31 localhost podman[100658]: 2026-02-01 08:50:31.870045297 +0000 UTC m=+0.074714922 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.5) Feb 1 03:50:31 localhost podman[100658]: 2026-02-01 08:50:31.882465455 +0000 UTC m=+0.087135070 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 1 03:50:31 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:50:32 localhost systemd[1]: tmp-crun.r94fDt.mount: Deactivated successfully. 
Feb 1 03:50:32 localhost podman[100679]: 2026-02-01 08:50:32.8775389 +0000 UTC m=+0.087981027 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 1 03:50:32 localhost systemd[1]: tmp-crun.7E7S4u.mount: Deactivated successfully. 
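[Editor's note] The nova_compute entry above carries the richest config_data of the set: host networking and IPC, privileged mode, explicit ulimits, a start_order, and a healthcheck with a port argument. As a purely illustrative sketch (not the actual tripleo_ansible code path), this is roughly how such a dict would render as podman run flags:

    # Illustrative sketch only: approximate how a tripleo-style config_data
    # dict (like the nova_compute entry above) maps onto `podman run` flags.
    import shlex

    def config_to_podman_args(name, cfg):
        args = ["podman", "run", "--detach", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("ipc"):
            args += ["--ipc", cfg["ipc"]]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for limit in cfg.get("ulimit", []):
            args += ["--ulimit", limit]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        if "healthcheck" in cfg:
            args += ["--health-cmd", cfg["healthcheck"]["test"]]
        args.append(cfg["image"])
        return args

    if __name__ == "__main__":
        demo = {
            "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
            "net": "host", "ipc": "host", "privileged": True, "user": "nova",
            "ulimit": ["nofile=131072", "memlock=67108864"],
            "healthcheck": {"test": "/openstack/healthcheck 5672"},
            "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1",
        }
        print(shlex.join(config_to_podman_args("nova_compute", demo)))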
Feb 1 03:50:32 localhost podman[100678]: 2026-02-01 08:50:32.927035803 +0000 UTC m=+0.140767972 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, container_name=logrotate_crond) Feb 1 03:50:32 localhost podman[100678]: 2026-02-01 08:50:32.936552951 +0000 UTC m=+0.150285050 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20260112.1, tcib_managed=true, release=1766032510, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.13, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:50:32 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:50:32 localhost podman[100679]: 2026-02-01 08:50:32.953122738 +0000 UTC m=+0.163564835 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:50:32 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:50:32 localhost podman[100686]: 2026-02-01 08:50:32.99069732 +0000 UTC m=+0.190510695 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:50:33 localhost podman[100680]: 2026-02-01 08:50:32.906424041 +0000 UTC m=+0.111047886 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, distribution-scope=public) Feb 1 03:50:33 localhost podman[100680]: 
2026-02-01 08:50:33.039688569 +0000 UTC m=+0.244312094 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, release=1766032510, vcs-type=git) Feb 1 03:50:33 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:50:33 localhost podman[100692]: 2026-02-01 08:50:33.092144775 +0000 UTC m=+0.286703706 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:50:33 localhost podman[100686]: 2026-02-01 08:50:33.098369849 +0000 UTC m=+0.298183254 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:50:33 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:50:33 localhost podman[100692]: 2026-02-01 08:50:33.122580314 +0000 UTC m=+0.317139285 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:50:33 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:50:38 localhost podman[100794]: 2026-02-01 08:50:38.865047711 +0000 UTC m=+0.083251649 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute) Feb 1 03:50:39 localhost podman[100794]: 2026-02-01 08:50:39.251871799 +0000 UTC m=+0.470075677 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:50:39 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:50:40 localhost podman[100817]: 2026-02-01 08:50:40.865684148 +0000 UTC m=+0.079084969 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, release=1766032510) Feb 1 03:50:40 localhost podman[100817]: 2026-02-01 08:50:40.879748186 +0000 UTC m=+0.093149037 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, tcib_managed=true, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z) Feb 1 03:50:40 localhost podman[100817]: unhealthy Feb 1 03:50:40 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:50:40 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:50:40 localhost podman[100818]: 2026-02-01 08:50:40.931546553 +0000 UTC m=+0.141476706 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:50:40 localhost podman[100818]: 2026-02-01 08:50:40.945418756 +0000 UTC m=+0.155348899 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:50:40 localhost podman[100818]: unhealthy Feb 1 03:50:40 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:50:40 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 1 03:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 03:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 1 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.
Feb 1 03:50:54 localhost podman[100858]: 2026-02-01 08:50:54.339499868 +0000 UTC m=+0.083014771 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Feb 1 03:50:54 localhost podman[100858]: 2026-02-01 08:50:54.553754822 +0000 UTC m=+0.297269695 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, container_name=metrics_qdr, io.buildah.version=1.41.5) Feb 1 03:50:54 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:51:02 localhost systemd[1]: tmp-crun.4VEgP1.mount: Deactivated successfully. 
Feb 1 03:51:02 localhost podman[100887]: 2026-02-01 08:51:02.872997748 +0000 UTC m=+0.084225638 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container) Feb 1 03:51:02 localhost podman[100887]: 2026-02-01 08:51:02.911624404 +0000 UTC m=+0.122852234 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:51:02 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:51:03 localhost podman[100909]: 2026-02-01 08:51:03.856672517 +0000 UTC m=+0.068773586 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, container_name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Feb 1 03:51:03 localhost podman[100909]: 2026-02-01 08:51:03.866783253 +0000 UTC m=+0.078884312 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:51:03 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:51:03 localhost podman[100916]: 2026-02-01 08:51:03.908626668 +0000 UTC m=+0.117093734 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:51:03 localhost systemd[1]: tmp-crun.4clC77.mount: Deactivated successfully. 
Feb 1 03:51:03 localhost podman[100908]: 2026-02-01 08:51:03.930549883 +0000 UTC m=+0.144074647 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 1 03:51:03 localhost podman[100911]: 2026-02-01 08:51:03.993017721 +0000 UTC m=+0.198736951 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, build-date=2026-01-12T23:07:47Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 1 03:51:04 localhost podman[100908]: 2026-02-01 08:51:04.011583751 +0000 UTC m=+0.225108565 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:51:04 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:51:04 localhost podman[100911]: 2026-02-01 08:51:04.023656687 +0000 UTC m=+0.229375857 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:51:04 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:51:04 localhost podman[100916]: 2026-02-01 08:51:04.061871899 +0000 UTC m=+0.270338965 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:51:04 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:51:04 localhost podman[100907]: 2026-02-01 08:51:03.962182949 +0000 UTC m=+0.179818251 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public) Feb 1 03:51:04 localhost podman[100907]: 2026-02-01 08:51:04.145695124 +0000 UTC m=+0.363330486 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, tcib_managed=true, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible) Feb 1 03:51:04 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:51:09 localhost podman[101104]: 2026-02-01 08:51:09.872603156 +0000 UTC m=+0.086053746 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:51:10 localhost podman[101104]: 2026-02-01 08:51:10.258524826 +0000 UTC m=+0.471975376 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4) Feb 1 03:51:10 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
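The nova_migration_target entries above show the healthy path of the healthcheck plumbing: systemd starts a transient unit named after the container id, that unit runs /usr/bin/podman healthcheck run <id>, podman logs health_status followed by exec_died, and the unit ends with "Deactivated successfully" when the probe exits 0. Below is a small sketch for reading back the health state podman has recorded for one of these containers; it assumes only that podman inspect is available on the host, and it tries both State.Health and State.Healthcheck because the key name differs between podman releases.

# Sketch (not the TripleO tooling itself): query the health state podman has
# recorded for a container, e.g. nova_migration_target from the entries above.
import json
import subprocess
import sys

def container_health(name: str) -> dict:
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0].get("State", {})
    # Recent podman exposes State.Health; some older releases used State.Healthcheck.
    return state.get("Health") or state.get("Healthcheck") or {}

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "nova_migration_target"
    health = container_health(target)
    print(health.get("Status", "unknown"))
    for entry in health.get("Log", [])[-3:]:   # last few probe results
        print(entry.get("ExitCode"), (entry.get("Output") or "").strip()[:120])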
Feb 1 03:51:11 localhost podman[101128]: 2026-02-01 08:51:11.871434697 +0000 UTC m=+0.080590115 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:51:11 localhost podman[101128]: 2026-02-01 08:51:11.917621458 +0000 UTC m=+0.126776846 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=ovn_controller, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 1 03:51:11 localhost podman[101128]: unhealthy Feb 1 03:51:11 localhost systemd[1]: tmp-crun.0cg4Fs.mount: Deactivated successfully. Feb 1 03:51:11 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:51:11 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:51:11 localhost podman[101127]: 2026-02-01 08:51:11.938400006 +0000 UTC m=+0.149550096 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:51:11 localhost podman[101127]: 2026-02-01 08:51:11.956604785 +0000 UTC m=+0.167754875 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:51:11 localhost podman[101127]: unhealthy Feb 1 03:51:11 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:51:11 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:51:24 localhost podman[101170]: 2026-02-01 08:51:24.874854041 +0000 UTC m=+0.089586566 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, 
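Here both ovn_controller and ovn_metadata_agent report health_status=unhealthy and their transient units fail with status=1/FAILURE. The config_data in those same entries records the probe commands ('/openstack/healthcheck 6642' and '/openstack/healthcheck'); the sketch below re-runs them by hand with podman exec to capture the probe output. Re-running the probes this way is an assumed debugging step, not something the log itself performs.

# Sketch: re-run the failing probes by hand. The commands come from the
# config_data 'healthcheck' entries logged above; invoking them via podman exec
# is an assumed debugging step.
import subprocess

PROBES = {
    "ovn_controller": ["/openstack/healthcheck", "6642"],
    "ovn_metadata_agent": ["/openstack/healthcheck"],
}

for container, cmd in PROBES.items():
    result = subprocess.run(
        ["podman", "exec", container, *cmd],
        capture_output=True, text=True,
    )
    verdict = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(f"{container}: {verdict}")
    if result.returncode != 0:
        # Print whatever the probe wrote; it may name the failing check.
        print((result.stderr or result.stdout).strip())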
url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:51:25 localhost podman[101170]: 2026-02-01 08:51:25.104858206 +0000 UTC m=+0.319590701 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510) Feb 1 03:51:25 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:51:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:51:30 localhost recover_tripleo_nova_virtqemud[101200]: 62016 Feb 1 03:51:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:51:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:51:33 localhost podman[101201]: 2026-02-01 08:51:33.866162875 +0000 UTC m=+0.080967697 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, container_name=collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:51:33 localhost podman[101201]: 2026-02-01 08:51:33.873926297 +0000 UTC m=+0.088731109 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container) Feb 1 03:51:33 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:51:33 localhost systemd[1]: tmp-crun.1InDP2.mount: Deactivated successfully. 
Feb 1 03:51:33 localhost podman[101220]: 2026-02-01 08:51:33.990181004 +0000 UTC m=+0.082469164 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, version=17.1.13) Feb 1 03:51:34 localhost podman[101220]: 2026-02-01 08:51:34.00095338 +0000 UTC m=+0.093241570 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, container_name=iscsid, architecture=x86_64, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 1 03:51:34 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
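The config_data blob in each event (see the collectd and iscsid entries above) is printed in Python literal syntax — single-quoted strings, True/False — so it can be parsed directly with ast.literal_eval once the braces are balanced. A sketch that lists a container's bind mounts from a saved copy of these lines follows; "messages.log" is again a placeholder path.

# Sketch: pull the config_data={...} blob out of one of the event lines above
# and list the container's bind mounts.
import ast

def extract_config_data(line: str) -> dict:
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The blob is a Python dict literal, so literal_eval parses it.
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unterminated config_data blob")

with open("messages.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "name=iscsid," in line and "config_data=" in line:
            cfg = extract_config_data(line)
            for volume in cfg.get("volumes", []):
                print(volume)
            break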
Feb 1 03:51:34 localhost podman[101238]: 2026-02-01 08:51:34.87533294 +0000 UTC m=+0.088856054 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:51:34 localhost podman[101238]: 2026-02-01 08:51:34.911608171 +0000 UTC m=+0.125131195 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, 
vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:51:34 localhost systemd[1]: tmp-crun.n6dGij.mount: Deactivated successfully. Feb 1 03:51:34 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:51:34 localhost podman[101239]: 2026-02-01 08:51:34.924834183 +0000 UTC m=+0.134719794 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 1 03:51:34 localhost podman[101240]: 2026-02-01 08:51:34.96190012 +0000 UTC m=+0.168650282 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 1 03:51:34 localhost podman[101239]: 2026-02-01 08:51:34.997259693 +0000 UTC m=+0.207145304 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible) Feb 1 03:51:35 localhost podman[101240]: 2026-02-01 08:51:35.004240471 +0000 UTC m=+0.210990623 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, architecture=x86_64, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=) Feb 1 03:51:35 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:51:35 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
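Each "Started /usr/bin/podman healthcheck run <id>" line pairs with a transient unit literally named <container-id>.service, which is what the later "Deactivated successfully" and "Failed with result 'exit-code'" messages refer to. The sketch below asks systemd for the outcome of those units for a few of the containers seen above; resolving a container name to its full id through podman ps is an assumed workflow detail, not something the log prescribes.

# Sketch: look up the systemd result of the transient healthcheck unit that
# podman creates per container (named "<full container id>.service").
import subprocess

def healthcheck_unit_result(container: str) -> str:
    cids = subprocess.run(
        ["podman", "ps", "--filter", f"name={container}",
         "--format", "{{.ID}}", "--no-trunc"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    if not cids:
        return "container not running"
    cid = cids[0]  # name filters match substrings; first hit is enough for a sketch
    show = subprocess.run(
        ["systemctl", "show", f"{cid}.service", "-p", "Result", "-p", "ExecMainStatus"],
        check=True, capture_output=True, text=True,
    ).stdout
    return show.strip().replace("\n", ", ")

for name in ("ceilometer_agent_compute", "ceilometer_agent_ipmi", "nova_compute"):
    print(name, "->", healthcheck_unit_result(name))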
Feb 1 03:51:35 localhost podman[101246]: 2026-02-01 08:51:35.034461694 +0000 UTC m=+0.235801378 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, release=1766032510) Feb 1 03:51:35 localhost podman[101246]: 2026-02-01 08:51:35.084740523 +0000 UTC m=+0.286080237 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, release=1766032510, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4) Feb 1 03:51:35 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
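Every cycle in this section follows the same shape: systemd starts a transient <container-id>.service wrapping /usr/bin/podman healthcheck run, podman records a "container health_status" event followed by "exec_died", and the unit deactivates (or fails when the check is unhealthy). A minimal sketch for tallying those health_status events from a plain-text journal export in this format, assuming only the health_status= and container_name= fields visible in the label dumps above:

```python
# Minimal sketch: tally the podman "container health_status" events in a
# plain-text journal export shaped like this log (several entries can share
# one physical line). Field names health_status= and container_name= are
# taken from the label dumps above.
import re
import sys
from collections import Counter

STATUS = re.compile(r"health_status=(\w+)")
NAME = re.compile(r"container_name=(\w+)")

def summarize(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # Each chunk starts just after one health_status event header,
            # so the first match of each field belongs to that event.
            for chunk in line.split("container health_status")[1:]:
                status = STATUS.search(chunk)
                name = NAME.search(chunk)
                if status and name:
                    counts[(name.group(1), status.group(1))] += 1
    return counts

if __name__ == "__main__":
    for (name, status), n in sorted(summarize(sys.argv[1]).items()):
        print(f"{name}: {status} x{n}")
```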
Feb 1 03:51:40 localhost podman[101332]: 2026-02-01 08:51:40.86699688 +0000 UTC m=+0.080140562 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 1 03:51:41 localhost podman[101332]: 2026-02-01 08:51:41.237409276 +0000 UTC m=+0.450552928 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, container_name=nova_migration_target) Feb 1 03:51:41 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:51:42 localhost podman[101357]: 2026-02-01 08:51:42.871912579 +0000 UTC m=+0.084995372 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:51:42 localhost podman[101357]: 2026-02-01 08:51:42.919720221 +0000 UTC m=+0.132802964 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, container_name=ovn_metadata_agent, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Feb 1 03:51:42 localhost podman[101357]: unhealthy Feb 1 03:51:42 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:51:42 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
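The status=1/FAILURE record above comes straight from the healthcheck wrapper: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container>, and podman exits non-zero (printing "unhealthy") when the container's configured check command fails. A minimal sketch of re-running the same check by hand and interpreting the exit code, assuming podman is on PATH and the ovn_metadata_agent container from the log still exists:

```python
# Minimal sketch: re-run the healthcheck that the transient systemd unit wraps.
# Assumes podman is installed and the container name from the log still exists.
import subprocess

def run_healthcheck(container: str) -> str:
    """Run `podman healthcheck run` and map its exit code to a status string.

    podman prints "unhealthy" and exits non-zero when the container's
    configured check command (here /openstack/healthcheck) fails, which is
    what systemd records above as status=1/FAILURE.
    """
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
        text=True,
    )
    return "healthy" if result.returncode == 0 else "unhealthy"

if __name__ == "__main__":
    # Container name taken from the unhealthy event above.
    print("ovn_metadata_agent:", run_healthcheck("ovn_metadata_agent"))
```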
Feb 1 03:51:42 localhost podman[101358]: 2026-02-01 08:51:42.926854564 +0000 UTC m=+0.136252732 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, distribution-scope=public) Feb 1 03:51:43 localhost podman[101358]: 2026-02-01 08:51:43.009844653 +0000 UTC m=+0.219242841 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4) Feb 1 03:51:43 localhost podman[101358]: unhealthy Feb 1 03:51:43 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:51:43 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:51:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:51:55 localhost podman[101397]: 2026-02-01 08:51:55.863313271 +0000 UTC m=+0.079273035 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510) Feb 1 03:51:56 localhost podman[101397]: 2026-02-01 08:51:56.078984219 +0000 UTC m=+0.294943983 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 1 03:51:56 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. 
Feb 1 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:52:04 localhost systemd[1]: tmp-crun.mQxsQ8.mount: Deactivated successfully. Feb 1 03:52:04 localhost podman[101427]: 2026-02-01 08:52:04.855981545 +0000 UTC m=+0.070769408 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 1 03:52:04 localhost podman[101427]: 2026-02-01 08:52:04.865413939 +0000 UTC m=+0.080201832 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, release=1766032510, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:34:43Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid) Feb 1 03:52:04 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
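The config_data=... label attached to each of these events is the TripleO container definition rendered as a Python literal (single quotes, True/False), not JSON, so ast.literal_eval recovers it where json.loads would fail. A minimal sketch using a shortened value copied from the iscsid entry above (volume list trimmed for brevity):

```python
# Minimal sketch: recover the TripleO container definition embedded in the
# config_data=... label of the podman events above. The label is a Python
# literal (single quotes, True/False), so ast.literal_eval parses it where
# a JSON parser would not.
import ast

# Shortened example value copied from the iscsid entry above (volumes trimmed).
config_data = (
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', "
    "'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2}"
)

definition = ast.literal_eval(config_data)
print(definition["image"])                # registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
print(definition["healthcheck"]["test"])  # /openstack/healthcheck
print(definition["privileged"])           # True
```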
Feb 1 03:52:04 localhost podman[101428]: 2026-02-01 08:52:04.934488375 +0000 UTC m=+0.145656186 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:52:04 localhost podman[101428]: 2026-02-01 08:52:04.973715279 +0000 UTC m=+0.184883130 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 1 03:52:04 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:52:05 localhost podman[101466]: 2026-02-01 08:52:05.035389853 +0000 UTC m=+0.081731611 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container) Feb 1 03:52:05 localhost podman[101482]: 2026-02-01 08:52:05.106946425 +0000 UTC m=+0.074401332 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:52:05 localhost podman[101485]: 2026-02-01 08:52:05.156182971 +0000 UTC m=+0.119102187 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com) Feb 1 03:52:05 localhost podman[101482]: 2026-02-01 08:52:05.163765448 +0000 UTC m=+0.131220335 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 1 03:52:05 localhost podman[101466]: 2026-02-01 08:52:05.179406945 +0000 UTC m=+0.225748773 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, architecture=x86_64, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team) Feb 1 03:52:05 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:52:05 localhost podman[101485]: 2026-02-01 08:52:05.189870492 +0000 UTC m=+0.152789698 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:52:05 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:52:05 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:52:05 localhost podman[101526]: 2026-02-01 08:52:05.250794973 +0000 UTC m=+0.092896229 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64) Feb 1 03:52:05 localhost podman[101526]: 2026-02-01 08:52:05.28277781 +0000 UTC m=+0.124879116 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 1 03:52:05 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:52:11 localhost podman[101645]: 2026-02-01 08:52:11.862247339 +0000 UTC m=+0.079016326 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=) Feb 1 03:52:12 localhost podman[101645]: 2026-02-01 08:52:12.231082966 +0000 UTC m=+0.447851993 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:52:12 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:52:13 localhost podman[101669]: 2026-02-01 08:52:13.87231138 +0000 UTC m=+0.089398880 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:52:13 localhost podman[101669]: 2026-02-01 08:52:13.915754996 +0000 UTC m=+0.132842426 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:36:40Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container) Feb 1 03:52:13 localhost podman[101669]: unhealthy Feb 1 03:52:13 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:52:13 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:52:13 localhost podman[101668]: 2026-02-01 08:52:13.938182126 +0000 UTC m=+0.157043861 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64) Feb 1 03:52:13 localhost podman[101668]: 2026-02-01 08:52:13.947528017 +0000 UTC m=+0.166389802 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Feb 1 03:52:13 localhost podman[101668]: unhealthy Feb 1 03:52:13 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:52:13 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:52:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:52:26 localhost podman[101707]: 2026-02-01 08:52:26.86974012 +0000 UTC m=+0.084998593 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13) Feb 1 03:52:27 
localhost podman[101707]: 2026-02-01 08:52:27.062928167 +0000 UTC m=+0.278186630 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:52:27 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:52:35 localhost podman[101736]: 2026-02-01 08:52:35.886214019 +0000 UTC m=+0.100180626 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 1 03:52:35 localhost systemd[1]: tmp-crun.5skrqR.mount: Deactivated successfully. 
Feb 1 03:52:35 localhost podman[101736]: 2026-02-01 08:52:35.973599765 +0000 UTC m=+0.187566372 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:52:35 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:52:35 localhost podman[101739]: 2026-02-01 08:52:35.996138219 +0000 UTC m=+0.202648634 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:52:36 localhost podman[101738]: 2026-02-01 08:52:36.041187004 +0000 UTC m=+0.250558429 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 
17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:52:36 localhost podman[101739]: 2026-02-01 08:52:36.053951522 +0000 UTC m=+0.260461977 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container) Feb 1 03:52:36 localhost podman[101750]: 2026-02-01 08:52:35.959383772 +0000 UTC m=+0.158953450 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 
collectd, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64) Feb 1 03:52:36 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:52:36 localhost podman[101738]: 2026-02-01 08:52:36.07503806 +0000 UTC m=+0.284409535 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13) Feb 1 03:52:36 localhost podman[101750]: 2026-02-01 08:52:36.089242983 +0000 UTC m=+0.288812621 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Feb 1 03:52:36 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:52:36 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:52:36 localhost podman[101737]: 2026-02-01 08:52:35.958576236 +0000 UTC m=+0.171931155 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:52:36 localhost podman[101737]: 2026-02-01 08:52:36.145782627 +0000 UTC m=+0.359137506 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step5, version=17.1.13, container_name=nova_compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 1 03:52:36 localhost podman[101745]: 2026-02-01 08:52:36.153624752 +0000 UTC m=+0.353136069 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z) Feb 1 03:52:36 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:52:36 localhost podman[101745]: 2026-02-01 08:52:36.187610152 +0000 UTC m=+0.387121509 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:07:30Z, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 1 03:52:36 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:52:42 localhost systemd[1]: tmp-crun.dLjBUN.mount: Deactivated successfully. 
Feb 1 03:52:42 localhost podman[101874]: 2026-02-01 08:52:42.864501681 +0000 UTC m=+0.083181786 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:52:43 localhost podman[101874]: 2026-02-01 08:52:43.231717428 +0000 UTC m=+0.450397513 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 1 03:52:43 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:52:44 localhost podman[101899]: 2026-02-01 08:52:44.85750274 +0000 UTC m=+0.070636845 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 1 03:52:44 localhost podman[101899]: 2026-02-01 08:52:44.874756208 +0000 UTC m=+0.087890323 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1766032510, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc.) Feb 1 03:52:44 localhost podman[101899]: unhealthy Feb 1 03:52:44 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:52:44 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:52:44 localhost podman[101900]: 2026-02-01 08:52:44.927522464 +0000 UTC m=+0.134639651 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, distribution-scope=public, release=1766032510, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 1 03:52:44 localhost podman[101900]: 2026-02-01 08:52:44.970830815 +0000 UTC m=+0.177948052 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:52:44 localhost podman[101900]: unhealthy Feb 1 03:52:44 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:52:44 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:52:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:52:57 localhost systemd[1]: tmp-crun.BfpgYH.mount: Deactivated successfully. Feb 1 03:52:57 localhost podman[101938]: 2026-02-01 08:52:57.880834135 +0000 UTC m=+0.090100952 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:52:58 localhost podman[101938]: 2026-02-01 08:52:58.081597229 +0000 UTC m=+0.290863986 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public) Feb 1 03:52:58 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:53:06 localhost systemd[1]: tmp-crun.ZymYXa.mount: Deactivated successfully. Feb 1 03:53:06 localhost podman[101968]: 2026-02-01 08:53:06.883984399 +0000 UTC m=+0.098153733 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:53:06 localhost systemd[1]: tmp-crun.P4A0YL.mount: Deactivated successfully. Feb 1 03:53:06 localhost podman[101968]: 2026-02-01 08:53:06.899940366 +0000 UTC m=+0.114109800 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, container_name=nova_compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Feb 1 03:53:06 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. Feb 1 03:53:06 localhost podman[101982]: 2026-02-01 08:53:06.932673448 +0000 UTC m=+0.128764429 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:53:06 localhost podman[101988]: 2026-02-01 08:53:06.970752806 +0000 UTC m=+0.166918429 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, architecture=x86_64, container_name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 1 03:53:06 localhost podman[101981]: 2026-02-01 08:53:06.97632226 +0000 UTC m=+0.176263890 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:53:06 localhost podman[101988]: 2026-02-01 08:53:06.978470336 +0000 UTC m=+0.174635959 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, release=1766032510, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:53:06 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:53:07 localhost podman[101982]: 2026-02-01 08:53:07.003254639 +0000 UTC m=+0.199345610 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:53:07 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:53:07 localhost podman[101967]: 2026-02-01 08:53:07.04171646 +0000 UTC m=+0.255331207 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 1 03:53:07 localhost podman[101969]: 2026-02-01 08:53:06.901697781 +0000 UTC m=+0.106574646 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container) Feb 1 03:53:07 localhost podman[101981]: 2026-02-01 08:53:07.071082145 +0000 UTC m=+0.271023765 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 1 03:53:07 localhost podman[101967]: 2026-02-01 08:53:07.078957531 +0000 UTC m=+0.292572328 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, distribution-scope=public, 
tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Feb 1 03:53:07 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:53:07 localhost podman[101969]: 2026-02-01 08:53:07.086548488 +0000 UTC m=+0.291425323 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:53:07 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:53:07 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:53:13 localhost podman[102234]: 2026-02-01 08:53:13.874029337 +0000 UTC m=+0.088796811 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:53:14 localhost podman[102234]: 2026-02-01 08:53:14.248730968 +0000 UTC m=+0.463498422 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, vcs-type=git) Feb 1 03:53:14 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:53:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:53:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:53:15 localhost podman[102257]: 2026-02-01 08:53:15.87099711 +0000 UTC m=+0.086660075 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:53:15 localhost podman[102258]: 2026-02-01 08:53:15.914443615 +0000 UTC m=+0.102716815 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 1 03:53:15 localhost podman[102257]: 2026-02-01 08:53:15.955858987 +0000 UTC m=+0.171522022 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:53:15 localhost podman[102257]: unhealthy Feb 1 03:53:15 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:53:15 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:53:16 localhost podman[102258]: 2026-02-01 08:53:16.009130669 +0000 UTC m=+0.197403829 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:53:16 localhost podman[102258]: unhealthy Feb 1 03:53:16 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:53:16 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:53:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:53:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:53:28 localhost recover_tripleo_nova_virtqemud[102303]: 62016 Feb 1 03:53:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:53:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:53:28 localhost systemd[1]: tmp-crun.EaOd1K.mount: Deactivated successfully. Feb 1 03:53:28 localhost podman[102296]: 2026-02-01 08:53:28.875049467 +0000 UTC m=+0.089416271 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git) Feb 1 03:53:29 localhost podman[102296]: 2026-02-01 08:53:29.068713129 +0000 UTC m=+0.283079853 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team) Feb 1 03:53:29 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:53:37 localhost podman[102331]: 2026-02-01 08:53:37.884715954 +0000 UTC m=+0.084904150 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true) Feb 1 03:53:37 localhost systemd[1]: tmp-crun.7UMhZT.mount: Deactivated successfully. 
Feb 1 03:53:37 localhost podman[102329]: 2026-02-01 08:53:37.939238504 +0000 UTC m=+0.143091215 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step5, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git) Feb 1 03:53:37 localhost podman[102328]: 2026-02-01 08:53:37.988522942 +0000 UTC m=+0.192965971 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container) Feb 1 03:53:37 localhost podman[102329]: 2026-02-01 08:53:37.993713444 +0000 UTC m=+0.197566165 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 1 03:53:38 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:53:38 localhost podman[102328]: 2026-02-01 08:53:38.021631116 +0000 UTC m=+0.226074125 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:53:38 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:53:38 localhost podman[102333]: 2026-02-01 08:53:38.038841902 +0000 UTC m=+0.233499976 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi) Feb 1 03:53:38 localhost podman[102331]: 2026-02-01 08:53:38.041858946 +0000 UTC m=+0.242047142 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red 
Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:53:38 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:53:38 localhost podman[102333]: 2026-02-01 08:53:38.091721952 +0000 UTC m=+0.286380076 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:53:38 localhost podman[102343]: 2026-02-01 08:53:38.101543988 +0000 UTC m=+0.291855146 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step3, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:53:38 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:53:38 localhost podman[102343]: 2026-02-01 08:53:38.116519886 +0000 UTC m=+0.306831014 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 1 03:53:38 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:53:38 localhost podman[102330]: 2026-02-01 08:53:38.201367113 +0000 UTC m=+0.399216117 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_id=tripleo_step3) Feb 1 03:53:38 localhost podman[102330]: 2026-02-01 08:53:38.213636505 +0000 UTC m=+0.411485509 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, distribution-scope=public, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, version=17.1.13, container_name=iscsid, 
batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 1 03:53:38 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:53:44 localhost podman[102461]: 2026-02-01 08:53:44.843546168 +0000 UTC m=+0.058903439 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13) Feb 1 03:53:45 localhost podman[102461]: 2026-02-01 08:53:45.223691918 +0000 UTC m=+0.439049139 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, managed_by=tripleo_ansible) Feb 1 03:53:45 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:53:46 localhost systemd[1]: tmp-crun.DosN8y.mount: Deactivated successfully. 
Feb 1 03:53:46 localhost podman[102486]: 2026-02-01 08:53:46.867811572 +0000 UTC m=+0.083583308 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, build-date=2026-01-12T22:36:40Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:53:46 localhost systemd[1]: tmp-crun.ffdjGP.mount: Deactivated successfully. 
Feb 1 03:53:46 localhost podman[102485]: 2026-02-01 08:53:46.922831938 +0000 UTC m=+0.138233713 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, architecture=x86_64, config_id=tripleo_step4) Feb 1 03:53:46 localhost podman[102486]: 2026-02-01 08:53:46.936843336 +0000 UTC m=+0.152615042 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:53:46 localhost podman[102486]: unhealthy Feb 1 03:53:46 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:53:46 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. 
Feb 1 03:53:46 localhost podman[102485]: 2026-02-01 08:53:46.967683598 +0000 UTC m=+0.183085403 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5) Feb 1 03:53:46 localhost podman[102485]: unhealthy Feb 1 03:53:46 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:53:46 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:53:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:53:59 localhost podman[102527]: 2026-02-01 08:53:59.856847381 +0000 UTC m=+0.075992502 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step1, managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:54:00 localhost podman[102527]: 2026-02-01 08:54:00.019068932 +0000 UTC m=+0.238214093 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:54:00 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:54:08 localhost systemd[1]: tmp-crun.527m4V.mount: Deactivated successfully. 
Feb 1 03:54:08 localhost podman[102563]: 2026-02-01 08:54:08.886100257 +0000 UTC m=+0.090457353 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, version=17.1.13, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:54:08 localhost systemd[1]: tmp-crun.WTTePs.mount: Deactivated successfully. 
Feb 1 03:54:08 localhost podman[102557]: 2026-02-01 08:54:08.941362282 +0000 UTC m=+0.153493320 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:54:08 localhost podman[102558]: 2026-02-01 08:54:08.957063182 +0000 UTC m=+0.164894716 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 1 03:54:08 localhost podman[102557]: 2026-02-01 08:54:08.973522864 +0000 UTC m=+0.185653902 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, tcib_managed=true, release=1766032510) Feb 1 03:54:08 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:54:08 localhost podman[102563]: 2026-02-01 08:54:08.987358646 +0000 UTC m=+0.191715742 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:54:08 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:54:09 localhost podman[102558]: 2026-02-01 08:54:09.040506904 +0000 UTC m=+0.248338488 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 1 03:54:09 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:54:09 localhost podman[102576]: 2026-02-01 08:54:09.041722123 +0000 UTC m=+0.236756028 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3) Feb 1 03:54:09 localhost podman[102556]: 2026-02-01 08:54:09.143343813 +0000 UTC m=+0.357186384 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc.) 
Feb 1 03:54:09 localhost podman[102556]: 2026-02-01 08:54:09.154498981 +0000 UTC m=+0.368341572 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z) Feb 1 03:54:09 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:54:09 localhost podman[102576]: 2026-02-01 08:54:09.175510197 +0000 UTC m=+0.370544172 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:54:09 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:54:09 localhost podman[102570]: 2026-02-01 08:54:09.095048447 +0000 UTC m=+0.295047137 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi) Feb 1 03:54:09 localhost podman[102570]: 2026-02-01 08:54:09.227814348 +0000 UTC m=+0.427812998 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 1 03:54:09 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. Feb 1 03:54:09 localhost systemd[1]: tmp-crun.MXBKg7.mount: Deactivated successfully. Feb 1 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:54:15 localhost podman[102771]: 2026-02-01 08:54:15.834161775 +0000 UTC m=+0.051847218 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4) Feb 1 03:54:16 localhost podman[102771]: 2026-02-01 08:54:16.23691027 +0000 UTC m=+0.454595723 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, tcib_managed=true, release=1766032510) Feb 1 03:54:16 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:54:17 localhost podman[102795]: 2026-02-01 08:54:17.879964882 +0000 UTC m=+0.083883318 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:54:17 localhost podman[102796]: 2026-02-01 08:54:17.932009475 +0000 UTC m=+0.132360940 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:54:17 localhost podman[102795]: 2026-02-01 08:54:17.951208734 +0000 UTC m=+0.155127180 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:54:17 localhost podman[102796]: 2026-02-01 08:54:17.951768621 +0000 UTC m=+0.152120056 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, build-date=2026-01-12T22:36:40Z) 
Feb 1 03:54:17 localhost podman[102795]: unhealthy Feb 1 03:54:17 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:54:17 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:54:18 localhost podman[102796]: unhealthy Feb 1 03:54:18 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:54:18 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:54:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:54:30 localhost systemd[1]: tmp-crun.78x7K7.mount: Deactivated successfully. Feb 1 03:54:30 localhost podman[102834]: 2026-02-01 08:54:30.872743116 +0000 UTC m=+0.083714053 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:54:31 localhost podman[102834]: 2026-02-01 08:54:31.069738671 +0000 UTC m=+0.280709608 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 1 03:54:31 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. 
Feb 1 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:54:39 localhost systemd[1]: tmp-crun.I7yv3U.mount: Deactivated successfully. Feb 1 03:54:39 localhost podman[102864]: 2026-02-01 08:54:39.877727804 +0000 UTC m=+0.095645304 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1) Feb 1 03:54:39 localhost podman[102866]: 2026-02-01 08:54:39.938772669 +0000 UTC m=+0.146572093 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:54:39 localhost podman[102866]: 2026-02-01 08:54:39.95068363 +0000 UTC m=+0.158483064 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 1 03:54:39 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:54:39 localhost podman[102864]: 2026-02-01 08:54:39.966070111 +0000 UTC m=+0.183987611 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1) Feb 1 03:54:39 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:54:40 localhost podman[102878]: 2026-02-01 08:54:40.038249602 +0000 UTC m=+0.241543776 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13) Feb 1 03:54:40 localhost podman[102878]: 2026-02-01 08:54:40.089501592 +0000 UTC m=+0.292795756 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, 
Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:54:40 localhost podman[102865]: 2026-02-01 08:54:40.089471091 +0000 UTC m=+0.302139698 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, 
url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.expose-services=) Feb 1 03:54:40 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Deactivated successfully. 
Feb 1 03:54:40 localhost podman[102872]: 2026-02-01 08:54:40.143682282 +0000 UTC m=+0.349339750 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13) Feb 1 03:54:40 localhost podman[102872]: 2026-02-01 08:54:40.169764555 +0000 UTC m=+0.375422043 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:54:40 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:54:40 localhost podman[102865]: 2026-02-01 08:54:40.218460895 +0000 UTC m=+0.431129992 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible) Feb 1 03:54:40 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Deactivated successfully. 
Feb 1 03:54:40 localhost podman[102879]: 2026-02-01 08:54:40.293432444 +0000 UTC m=+0.491322360 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, container_name=collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:54:40 localhost podman[102879]: 2026-02-01 08:54:40.329637873 +0000 UTC m=+0.527527829 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible) Feb 1 03:54:40 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:54:46 localhost podman[102998]: 2026-02-01 08:54:46.866303837 +0000 UTC m=+0.080374919 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public) Feb 1 03:54:47 localhost podman[102998]: 2026-02-01 08:54:47.189702326 +0000 UTC m=+0.403773418 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:54:47 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:54:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:54:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:54:48 localhost podman[103021]: 2026-02-01 08:54:48.869598426 +0000 UTC m=+0.084794356 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510) Feb 1 03:54:48 localhost podman[103021]: 2026-02-01 08:54:48.885655707 +0000 UTC m=+0.100851617 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent) Feb 1 03:54:48 localhost podman[103021]: unhealthy Feb 1 03:54:48 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:54:48 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:54:48 localhost podman[103022]: 2026-02-01 08:54:48.972200338 +0000 UTC m=+0.183831997 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:54:48 localhost podman[103022]: 2026-02-01 08:54:48.989036113 +0000 UTC m=+0.200667772 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:54:48 localhost podman[103022]: unhealthy Feb 1 03:54:49 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:54:49 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:55:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:55:01 localhost podman[103061]: 2026-02-01 08:55:01.842201872 +0000 UTC m=+0.065147795 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1766032510, architecture=x86_64, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:55:02 localhost podman[103061]: 2026-02-01 08:55:02.045943277 +0000 UTC m=+0.268889150 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z) Feb 1 03:55:02 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. 
Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:55:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:55:10 localhost recover_tripleo_nova_virtqemud[103124]: 62016 Feb 1 03:55:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:55:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:55:10 localhost podman[103089]: 2026-02-01 08:55:10.874524673 +0000 UTC m=+0.083945810 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:55:10 localhost podman[103106]: 2026-02-01 08:55:10.888485339 +0000 UTC m=+0.078263523 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:55:10 localhost podman[103088]: 2026-02-01 
08:55:10.922018165 +0000 UTC m=+0.133494226 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public) Feb 1 03:55:10 localhost podman[103091]: 2026-02-01 08:55:10.944969851 +0000 UTC m=+0.145812960 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 1 03:55:10 localhost podman[103091]: 2026-02-01 08:55:10.971626803 +0000 UTC m=+0.172469842 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5) Feb 1 03:55:10 localhost podman[103110]: 2026-02-01 08:55:10.984403121 +0000 UTC m=+0.175002110 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-collectd-container, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z) Feb 1 03:55:11 localhost podman[103088]: 2026-02-01 08:55:11.004695274 +0000 UTC m=+0.216171385 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=logrotate_crond, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron) Feb 1 03:55:11 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:55:11 localhost podman[103110]: 2026-02-01 08:55:11.017778763 +0000 UTC m=+0.208377782 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:55:11 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:55:11 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:55:11 localhost podman[103106]: 2026-02-01 08:55:11.060476074 +0000 UTC m=+0.250254278 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510) Feb 1 03:55:11 localhost podman[103106]: unhealthy Feb 1 03:55:11 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:11 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. 
Feb 1 03:55:11 localhost podman[103090]: 2026-02-01 08:55:11.144510586 +0000 UTC m=+0.355043227 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid) Feb 1 03:55:11 localhost podman[103089]: 2026-02-01 08:55:11.169932269 +0000 UTC m=+0.379353456 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:55:11 localhost podman[103089]: unhealthy Feb 1 03:55:11 localhost podman[103090]: 2026-02-01 08:55:11.17954681 +0000 UTC m=+0.390079491 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=iscsid, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container) Feb 1 03:55:11 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:11 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:55:11 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:55:15 localhost podman[103327]: 2026-02-01 08:55:15.330881023 +0000 UTC m=+0.092754254 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-12-08T17:28:53Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) 
Feb 1 03:55:15 localhost podman[103327]: 2026-02-01 08:55:15.436072925 +0000 UTC m=+0.197946186 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, release=1764794109, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Feb 1 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:55:17 localhost systemd[1]: tmp-crun.qz6gQd.mount: Deactivated successfully. Feb 1 03:55:17 localhost podman[103471]: 2026-02-01 08:55:17.343769982 +0000 UTC m=+0.093145027 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:55:17 localhost podman[103471]: 2026-02-01 08:55:17.723888811 +0000 UTC m=+0.473263876 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 1 03:55:17 localhost 
systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:55:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:55:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:55:19 localhost podman[103493]: 2026-02-01 08:55:19.861047237 +0000 UTC m=+0.078143199 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:55:19 localhost podman[103493]: 2026-02-01 
08:55:19.878748759 +0000 UTC m=+0.095844711 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:56:19Z, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 1 03:55:19 localhost podman[103493]: unhealthy Feb 1 03:55:19 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:19 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:55:19 localhost podman[103494]: 2026-02-01 08:55:19.923737793 +0000 UTC m=+0.140290458 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:55:19 localhost podman[103494]: 2026-02-01 08:55:19.967708855 +0000 UTC m=+0.184261540 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:55:19 localhost podman[103494]: unhealthy Feb 1 03:55:19 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:19 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:55:23 localhost sshd[103534]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:55:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:55:32 localhost systemd[1]: tmp-crun.b2wV68.mount: Deactivated successfully. Feb 1 03:55:32 localhost podman[103536]: 2026-02-01 08:55:32.879899603 +0000 UTC m=+0.093532759 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, container_name=metrics_qdr, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:55:33 localhost podman[103536]: 2026-02-01 08:55:33.09809309 +0000 UTC m=+0.311726256 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:55:33 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:55:41 localhost systemd[1]: tmp-crun.dPT9Ea.mount: Deactivated successfully. Feb 1 03:55:41 localhost podman[103566]: 2026-02-01 08:55:41.940705425 +0000 UTC m=+0.142853688 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, vendor=Red Hat, Inc.) Feb 1 03:55:41 localhost podman[103566]: 2026-02-01 08:55:41.958540642 +0000 UTC m=+0.160688915 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64) Feb 1 03:55:41 localhost podman[103566]: unhealthy Feb 1 03:55:41 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:41 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:55:42 localhost podman[103567]: 2026-02-01 08:55:42.036208255 +0000 UTC m=+0.236496610 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3) Feb 1 03:55:42 localhost podman[103567]: 2026-02-01 
08:55:42.067443429 +0000 UTC m=+0.267731794 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 03:55:42 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:55:42 localhost podman[103574]: 2026-02-01 08:55:42.149269702 +0000 UTC m=+0.341504056 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, distribution-scope=public, release=1766032510, container_name=ceilometer_agent_ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.) 
Feb 1 03:55:42 localhost podman[103585]: 2026-02-01 08:55:41.916362666 +0000 UTC m=+0.109218499 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:55:42 localhost podman[103565]: 2026-02-01 08:55:42.195810894 +0000 UTC m=+0.400295799 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, config_id=tripleo_step4, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z) Feb 1 03:55:42 localhost podman[103565]: 2026-02-01 08:55:42.209569843 +0000 UTC m=+0.414054728 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-cron-container, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com) Feb 1 03:55:42 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:55:42 localhost podman[103574]: 2026-02-01 08:55:42.232357124 +0000 UTC m=+0.424591518 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1766032510, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 1 03:55:42 localhost podman[103574]: unhealthy Feb 1 03:55:42 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:42 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. Feb 1 03:55:42 localhost podman[103585]: 2026-02-01 08:55:42.256853318 +0000 UTC m=+0.449709181 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 1 03:55:42 localhost podman[103568]: 2026-02-01 08:55:42.213916619 +0000 UTC m=+0.408069623 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:55:42 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:55:42 localhost podman[103568]: 2026-02-01 08:55:42.296699802 +0000 UTC m=+0.490852816 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64) Feb 1 03:55:42 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. Feb 1 03:55:42 localhost systemd[1]: tmp-crun.zvWTnb.mount: Deactivated successfully. Feb 1 03:55:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. 
Feb 1 03:55:47 localhost podman[103696]: 2026-02-01 08:55:47.861032589 +0000 UTC m=+0.076587061 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 1 03:55:48 localhost podman[103696]: 2026-02-01 08:55:48.268179811 +0000 UTC m=+0.483734263 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64) Feb 1 03:55:48 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:55:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:55:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:55:50 localhost podman[103719]: 2026-02-01 08:55:50.872038097 +0000 UTC m=+0.086325484 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:55:50 localhost podman[103719]: 2026-02-01 08:55:50.887819409 +0000 UTC m=+0.102106836 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 03:55:50 localhost podman[103719]: unhealthy Feb 1 03:55:50 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:50 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:55:50 localhost systemd[1]: tmp-crun.1rvglo.mount: Deactivated successfully. 
Feb 1 03:55:50 localhost podman[103720]: 2026-02-01 08:55:50.98176195 +0000 UTC m=+0.191586899 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 1 03:55:51 localhost podman[103720]: 2026-02-01 08:55:51.024816363 +0000 UTC m=+0.234641262 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:55:51 localhost podman[103720]: unhealthy Feb 1 03:55:51 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:55:51 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:55:57 localhost sshd[103759]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:55:57 localhost systemd-logind[761]: New session 35 of user zuul. Feb 1 03:55:57 localhost systemd[1]: Started Session 35 of User zuul. Feb 1 03:55:58 localhost python3.9[103854]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 03:55:59 localhost python3.9[103948]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:56:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63626 DF PROTO=TCP SPT=47630 DPT=9882 SEQ=2619242476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643C86C0000000001030307) Feb 1 03:56:00 localhost python3.9[104041]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 03:56:00 localhost python3.9[104135]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:56:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=63627 DF PROTO=TCP SPT=47630 DPT=9882 SEQ=2619242476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643CC8E0000000001030307) Feb 1 03:56:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7398 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3528091407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643CE750000000001030307) Feb 1 03:56:01 localhost python3.9[104228]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 03:56:02 localhost python3.9[104319]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Feb 1 03:56:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7399 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3528091407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643D28D0000000001030307) Feb 1 03:56:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63628 DF PROTO=TCP SPT=47630 DPT=9882 SEQ=2619242476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643D48D0000000001030307) Feb 1 03:56:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48511 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=1568214925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643D6000000000001030307) Feb 1 03:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:56:03 localhost podman[104379]: 2026-02-01 08:56:03.873495939 +0000 UTC m=+0.080678028 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.41.5) Feb 1 03:56:04 localhost python3.9[104421]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 03:56:04 localhost podman[104379]: 2026-02-01 08:56:04.095721142 +0000 UTC m=+0.302903251 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:56:04 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:56:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48512 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=1568214925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643DA0E0000000001030307) Feb 1 03:56:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7400 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3528091407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643DA8D0000000001030307) Feb 1 03:56:04 localhost python3.9[104532]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Feb 1 03:56:05 localhost python3.9[104622]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 03:56:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48513 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=1568214925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643E20D0000000001030307) Feb 1 03:56:06 localhost python3.9[104670]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 03:56:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31412 DF PROTO=TCP SPT=34918 DPT=9100 SEQ=1191802207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643E27B0000000001030307) Feb 1 03:56:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63629 DF PROTO=TCP SPT=47630 DPT=9882 SEQ=2619242476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643E44D0000000001030307) Feb 1 03:56:07 localhost systemd-logind[761]: Session 35 logged out. Waiting for processes to exit. Feb 1 03:56:07 localhost systemd[1]: session-35.scope: Deactivated successfully. Feb 1 03:56:07 localhost systemd[1]: session-35.scope: Consumed 4.728s CPU time. Feb 1 03:56:07 localhost systemd-logind[761]: Removed session 35. 
Feb 1 03:56:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31413 DF PROTO=TCP SPT=34918 DPT=9100 SEQ=1191802207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643E68D0000000001030307) Feb 1 03:56:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7401 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3528091407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643EA4E0000000001030307) Feb 1 03:56:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31414 DF PROTO=TCP SPT=34918 DPT=9100 SEQ=1191802207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643EE8D0000000001030307) Feb 1 03:56:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54381 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=1738375931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643EF9F0000000001030307) Feb 1 03:56:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48514 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=1568214925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643F1CD0000000001030307) Feb 1 03:56:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54382 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=1738375931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643F38D0000000001030307) Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:56:12 localhost systemd[1]: tmp-crun.9S1I6G.mount: Deactivated successfully. 
Feb 1 03:56:12 localhost podman[104686]: 2026-02-01 08:56:12.884742105 +0000 UTC m=+0.097746651 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-cron) Feb 1 03:56:12 localhost podman[104687]: 2026-02-01 08:56:12.900038732 +0000 UTC m=+0.106131202 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, distribution-scope=public) Feb 1 03:56:12 localhost podman[104687]: 2026-02-01 08:56:12.916704492 +0000 UTC m=+0.122796922 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, distribution-scope=public, vcs-type=git) Feb 1 03:56:12 localhost podman[104687]: unhealthy Feb 1 03:56:12 localhost podman[104686]: 2026-02-01 08:56:12.922661678 +0000 UTC m=+0.135666254 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5) Feb 1 03:56:12 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:12 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:56:12 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:56:12 localhost podman[104695]: 2026-02-01 08:56:12.996959656 +0000 UTC m=+0.191075902 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 1 03:56:13 localhost podman[104694]: 2026-02-01 08:56:13.041620739 +0000 UTC m=+0.243262680 container health_status 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z) Feb 1 03:56:13 
localhost podman[104695]: 2026-02-01 08:56:13.0438801 +0000 UTC m=+0.237996366 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, distribution-scope=public) Feb 1 03:56:13 localhost podman[104705]: 2026-02-01 08:56:13.052472878 +0000 UTC m=+0.244644183 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack 
Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:56:13 localhost podman[104705]: 2026-02-01 08:56:13.06312992 +0000 UTC m=+0.255301255 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, container_name=collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3) Feb 1 03:56:13 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:56:13 localhost podman[104695]: unhealthy Feb 1 03:56:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54383 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=1738375931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643FB8D0000000001030307) Feb 1 03:56:13 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:13 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. 
Feb 1 03:56:13 localhost podman[104694]: 2026-02-01 08:56:13.121584374 +0000 UTC m=+0.323226335 container exec_died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 1 03:56:13 localhost podman[104688]: 2026-02-01 08:56:13.151025852 +0000 UTC m=+0.355206122 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid) Feb 1 03:56:13 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Deactivated successfully. 
Feb 1 03:56:13 localhost podman[104688]: 2026-02-01 08:56:13.190554826 +0000 UTC m=+0.394735106 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:56:13 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:56:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31415 DF PROTO=TCP SPT=34918 DPT=9100 SEQ=1191802207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA643FE4D0000000001030307) Feb 1 03:56:13 localhost systemd[1]: tmp-crun.xSNd8v.mount: Deactivated successfully. 
Feb 1 03:56:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63630 DF PROTO=TCP SPT=47630 DPT=9882 SEQ=2619242476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644050E0000000001030307) Feb 1 03:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:56:18 localhost podman[104875]: 2026-02-01 08:56:18.861000853 +0000 UTC m=+0.074806245 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:56:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48515 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=1568214925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644130D0000000001030307) Feb 1 03:56:19 localhost podman[104875]: 
2026-02-01 08:56:19.220110697 +0000 UTC m=+0.433916119 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, container_name=nova_migration_target, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:56:19 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:56:21 localhost systemd[1]: tmp-crun.4Fi5UU.mount: Deactivated successfully. 
Feb 1 03:56:21 localhost podman[104915]: 2026-02-01 08:56:21.862060631 +0000 UTC m=+0.077683745 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z) Feb 1 03:56:21 localhost podman[104915]: 2026-02-01 08:56:21.880784535 +0000 UTC m=+0.096407679 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true) Feb 1 03:56:21 localhost podman[104915]: unhealthy Feb 1 03:56:21 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:21 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:56:21 localhost systemd[1]: tmp-crun.ejFTCy.mount: Deactivated successfully. Feb 1 03:56:21 localhost podman[104914]: 2026-02-01 08:56:21.971278928 +0000 UTC m=+0.185306632 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:56:21 localhost podman[104914]: 2026-02-01 08:56:21.991616852 +0000 UTC m=+0.205644536 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13) Feb 1 03:56:21 localhost podman[104914]: unhealthy Feb 1 03:56:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:56:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31416 DF PROTO=TCP SPT=34918 DPT=9100 SEQ=1191802207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6441F0D0000000001030307) Feb 1 03:56:22 localhost sshd[104956]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:56:22 localhost systemd-logind[761]: New session 36 of user zuul. Feb 1 03:56:22 localhost systemd[1]: Started Session 36 of User zuul. Feb 1 03:56:23 localhost python3.9[105051]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 03:56:23 localhost systemd[1]: Reloading. Feb 1 03:56:23 localhost systemd-sysv-generator[105078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:56:23 localhost systemd-rc-local-generator[105072]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:56:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:56:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:56:23 localhost recover_tripleo_nova_virtqemud[105089]: 62016 Feb 1 03:56:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:56:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:56:24 localhost python3.9[105179]: ansible-ansible.builtin.service_facts Invoked Feb 1 03:56:24 localhost network[105196]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 03:56:24 localhost network[105197]: 'network-scripts' will be removed from distribution in near future. Feb 1 03:56:24 localhost network[105198]: It is advised to switch to 'NetworkManager' instead for network management. 
Feb 1 03:56:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54385 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=1738375931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6442B0D0000000001030307) Feb 1 03:56:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:56:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20731 DF PROTO=TCP SPT=45992 DPT=9882 SEQ=1838796460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6443D9D0000000001030307) Feb 1 03:56:30 localhost python3.9[105397]: ansible-ansible.builtin.service_facts Invoked Feb 1 03:56:30 localhost network[105414]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 03:56:30 localhost network[105415]: 'network-scripts' will be removed from distribution in near future. Feb 1 03:56:30 localhost network[105416]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 03:56:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20732 DF PROTO=TCP SPT=45992 DPT=9882 SEQ=1838796460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644418E0000000001030307) Feb 1 03:56:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:56:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20733 DF PROTO=TCP SPT=45992 DPT=9882 SEQ=1838796460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644498D0000000001030307) Feb 1 03:56:34 localhost python3.9[105616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:56:34 localhost systemd[1]: Reloading. 
Feb 1 03:56:34 localhost podman[105618]: 2026-02-01 08:56:34.456346261 +0000 UTC m=+0.114473202 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 03:56:34 localhost systemd-sysv-generator[105667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:56:34 localhost systemd-rc-local-generator[105663]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:56:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 03:56:34 localhost podman[105618]: 2026-02-01 08:56:34.62941627 +0000 UTC m=+0.287543141 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:56:34 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:56:34 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
Feb 1 03:56:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17861 DF PROTO=TCP SPT=45846 DPT=9102 SEQ=885822400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644574D0000000001030307) Feb 1 03:56:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48224 DF PROTO=TCP SPT=59532 DPT=9100 SEQ=2392031059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64463CD0000000001030307) Feb 1 03:56:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40750 DF PROTO=TCP SPT=49748 DPT=9101 SEQ=534350193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64470CD0000000001030307) Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:56:43 localhost podman[105699]: 2026-02-01 08:56:43.391013847 +0000 UTC m=+0.100013321 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510) Feb 1 03:56:43 localhost podman[105699]: 2026-02-01 08:56:43.397394826 +0000 UTC m=+0.106394230 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, architecture=x86_64) Feb 1 03:56:43 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:56:43 localhost podman[105701]: 2026-02-01 08:56:43.437525729 +0000 UTC m=+0.141624180 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Feb 1 03:56:43 localhost podman[105701]: 2026-02-01 08:56:43.44301411 +0000 UTC m=+0.147112531 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 1 03:56:43 localhost 
systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:56:43 localhost podman[105721]: 2026-02-01 08:56:43.493196276 +0000 UTC m=+0.187520132 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, release=1766032510, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Feb 1 03:56:43 localhost podman[105721]: 2026-02-01 08:56:43.501430312 +0000 UTC m=+0.195754218 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 1 03:56:43 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. Feb 1 03:56:43 localhost podman[105702]: Error: container 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 is not running Feb 1 03:56:43 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Main process exited, code=exited, status=125/n/a Feb 1 03:56:43 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed with result 'exit-code'. 
Feb 1 03:56:43 localhost podman[105700]: 2026-02-01 08:56:43.539069056 +0000 UTC m=+0.245423688 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 1 03:56:43 localhost podman[105700]: 2026-02-01 08:56:43.558319687 +0000 UTC m=+0.264674329 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 1 03:56:43 localhost podman[105700]: unhealthy Feb 1 03:56:43 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:43 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
Feb 1 03:56:43 localhost podman[105708]: 2026-02-01 08:56:43.602070861 +0000 UTC m=+0.299056400 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T23:07:30Z, version=17.1.13, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:56:43 localhost podman[105708]: 2026-02-01 08:56:43.628018032 +0000 UTC m=+0.325003571 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:56:43 localhost podman[105708]: unhealthy Feb 1 03:56:43 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:43 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. Feb 1 03:56:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20735 DF PROTO=TCP SPT=45992 DPT=9882 SEQ=1838796460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644790E0000000001030307) Feb 1 03:56:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17863 DF PROTO=TCP SPT=45846 DPT=9102 SEQ=885822400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644870D0000000001030307) Feb 1 03:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:56:49 localhost systemd[1]: tmp-crun.8PCscJ.mount: Deactivated successfully. 
Feb 1 03:56:49 localhost podman[105822]: 2026-02-01 08:56:49.625517344 +0000 UTC m=+0.094442778 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4) Feb 1 03:56:49 localhost podman[105822]: 2026-02-01 08:56:49.989024514 +0000 UTC m=+0.457949958 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true) Feb 1 03:56:50 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:56:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48226 DF PROTO=TCP SPT=59532 DPT=9100 SEQ=2392031059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644930E0000000001030307) Feb 1 03:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:56:52 localhost podman[105845]: 2026-02-01 08:56:52.372115353 +0000 UTC m=+0.087079368 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2026-01-12T22:56:19Z, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:56:52 localhost podman[105846]: 2026-02-01 08:56:52.427655866 +0000 UTC m=+0.138861234 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 1 03:56:52 localhost podman[105845]: 2026-02-01 08:56:52.441475957 +0000 UTC m=+0.156440032 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 1 03:56:52 localhost podman[105845]: unhealthy Feb 1 03:56:52 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:52 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:56:52 localhost podman[105846]: 2026-02-01 08:56:52.466930791 +0000 UTC m=+0.178136119 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, vcs-type=git) Feb 1 03:56:52 localhost podman[105846]: unhealthy Feb 1 03:56:52 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:56:52 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:56:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40752 DF PROTO=TCP SPT=49748 DPT=9101 SEQ=534350193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644A10E0000000001030307) Feb 1 03:57:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40398 DF PROTO=TCP SPT=47730 DPT=9882 SEQ=2753011288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644B2CC0000000001030307) Feb 1 03:57:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40399 DF PROTO=TCP SPT=47730 DPT=9882 SEQ=2753011288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644B6CD0000000001030307) Feb 1 03:57:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40400 DF PROTO=TCP SPT=47730 DPT=9882 SEQ=2753011288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644BECD0000000001030307) Feb 1 03:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:57:04 localhost podman[105882]: 2026-02-01 08:57:04.854436783 +0000 UTC m=+0.075296881 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:57:05 localhost podman[105882]: 2026-02-01 08:57:05.070565605 +0000 UTC m=+0.291425703 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1) Feb 1 03:57:05 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:57:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44258 DF PROTO=TCP SPT=32870 DPT=9102 SEQ=3247779277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644CC8D0000000001030307) Feb 1 03:57:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22093 DF PROTO=TCP SPT=39610 DPT=9100 SEQ=778640740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644D8CD0000000001030307) Feb 1 03:57:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17493 DF PROTO=TCP SPT=59604 DPT=9101 SEQ=2326194972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644E60E0000000001030307) Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:57:13 localhost podman[105911]: 2026-02-01 08:57:13.621006426 +0000 UTC m=+0.082650449 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. 
Feb 1 03:57:13 localhost podman[105911]: 2026-02-01 08:57:13.639482573 +0000 UTC m=+0.101126546 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, architecture=x86_64) Feb 1 03:57:13 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. Feb 1 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:57:13 localhost systemd[1]: tmp-crun.nQEEt4.mount: Deactivated successfully. 
Feb 1 03:57:13 localhost podman[105913]: 2026-02-01 08:57:13.701954502 +0000 UTC m=+0.154937815 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:57:13 localhost podman[105954]: Error: container 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 is not running Feb 1 03:57:13 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Main process exited, code=exited, status=125/n/a Feb 1 03:57:13 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed with result 'exit-code'. 
Feb 1 03:57:13 localhost podman[105913]: 2026-02-01 08:57:13.785743346 +0000 UTC m=+0.238726689 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step3, container_name=collectd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd) Feb 1 03:57:13 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:57:13 localhost podman[105953]: 2026-02-01 08:57:13.838279145 +0000 UTC m=+0.193659832 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, container_name=nova_compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public) Feb 1 03:57:13 localhost podman[105965]: 2026-02-01 08:57:13.767128245 +0000 UTC m=+0.089091401 container health_status 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 1 03:57:13 localhost podman[105912]: 2026-02-01 08:57:13.739553004 +0000 UTC m=+0.196709757 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:34:43Z, container_name=iscsid, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.41.5) Feb 1 03:57:13 localhost podman[105953]: 2026-02-01 08:57:13.897033348 +0000 UTC m=+0.252414005 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:57:13 localhost podman[105953]: unhealthy Feb 1 03:57:13 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:13 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:57:13 localhost podman[105912]: 2026-02-01 08:57:13.923761362 +0000 UTC m=+0.380918085 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Feb 1 03:57:13 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. Feb 1 03:57:13 localhost podman[105965]: 2026-02-01 08:57:13.948002508 +0000 UTC m=+0.269965704 container exec_died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 1 03:57:13 localhost podman[105965]: unhealthy Feb 1 03:57:13 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:13 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. Feb 1 03:57:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40402 DF PROTO=TCP SPT=47730 DPT=9882 SEQ=2753011288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA644EF0D0000000001030307) Feb 1 03:57:16 localhost podman[105685]: time="2026-02-01T08:57:16Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Feb 1 03:57:16 localhost systemd[1]: libpod-35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.scope: Deactivated successfully. Feb 1 03:57:16 localhost systemd[1]: libpod-35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.scope: Consumed 5.611s CPU time. Feb 1 03:57:16 localhost podman[105685]: 2026-02-01 08:57:16.815042265 +0000 UTC m=+42.092407497 container stop 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 
17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:57:16 localhost podman[105685]: 2026-02-01 08:57:16.843921566 +0000 UTC m=+42.121286868 container died 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 1 03:57:16 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.timer: Deactivated successfully. 
Feb 1 03:57:16 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9. Feb 1 03:57:16 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed to open /run/systemd/transient/35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: No such file or directory Feb 1 03:57:16 localhost systemd[1]: tmp-crun.Sws7Xe.mount: Deactivated successfully. Feb 1 03:57:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9-userdata-shm.mount: Deactivated successfully. Feb 1 03:57:16 localhost systemd[1]: var-lib-containers-storage-overlay-e8a1138cbb1c83236f4de65652beadb5bc0b1f3b8c525083bd1db3fda89ebbe0-merged.mount: Deactivated successfully. Feb 1 03:57:16 localhost podman[105685]: 2026-02-01 08:57:16.955821227 +0000 UTC m=+42.233186409 container cleanup 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true) Feb 1 03:57:16 localhost podman[105685]: ceilometer_agent_compute Feb 1 03:57:16 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.timer: Failed to open /run/systemd/transient/35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.timer: No such file or directory Feb 1 03:57:16 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed to open /run/systemd/transient/35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: No such file or directory Feb 1 03:57:16 localhost podman[106030]: 2026-02-01 08:57:16.973625732 +0000 UTC m=+0.140945588 container cleanup 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc.) Feb 1 03:57:16 localhost systemd[1]: libpod-conmon-35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.scope: Deactivated successfully. 
Feb 1 03:57:17 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.timer: Failed to open /run/systemd/transient/35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.timer: No such file or directory Feb 1 03:57:17 localhost systemd[1]: 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: Failed to open /run/systemd/transient/35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9.service: No such file or directory Feb 1 03:57:17 localhost podman[106044]: 2026-02-01 08:57:17.072244449 +0000 UTC m=+0.066043812 container cleanup 35d058b219b984b777df5c2dfe4ed6791ef0c95fff9166d87128d4b7e0ad7ea9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, release=1766032510, url=https://www.redhat.com, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=) Feb 1 03:57:17 localhost podman[106044]: ceilometer_agent_compute Feb 1 03:57:17 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Feb 1 03:57:17 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
Feb 1 03:57:17 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.068s CPU time, no IO. Feb 1 03:57:17 localhost python3.9[106148]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:57:17 localhost systemd[1]: Reloading. Feb 1 03:57:17 localhost systemd-rc-local-generator[106172]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:57:17 localhost systemd-sysv-generator[106176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:57:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:57:18 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Feb 1 03:57:18 localhost systemd[1]: tmp-crun.XEAlK8.mount: Deactivated successfully. Feb 1 03:57:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:a1:06:ee MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=46032 SEQ=464998252 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 1 03:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:57:20 localhost systemd[1]: tmp-crun.Kkj144.mount: Deactivated successfully. Feb 1 03:57:20 localhost podman[106263]: 2026-02-01 08:57:20.37371711 +0000 UTC m=+0.087368688 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 03:57:20 localhost podman[106263]: 2026-02-01 08:57:20.742906347 +0000 UTC m=+0.456557935 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-type=git, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.5, vendor=Red Hat, Inc.) Feb 1 03:57:20 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:57:21 localhost sshd[106302]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:57:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22095 DF PROTO=TCP SPT=39610 DPT=9100 SEQ=778640740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645090E0000000001030307) Feb 1 03:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:57:22 localhost systemd[1]: tmp-crun.64nsJU.mount: Deactivated successfully. Feb 1 03:57:22 localhost podman[106304]: 2026-02-01 08:57:22.617434429 +0000 UTC m=+0.078560052 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com) Feb 1 03:57:22 localhost podman[106304]: 2026-02-01 08:57:22.632649745 +0000 UTC m=+0.093775368 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:57:22 localhost podman[106304]: unhealthy Feb 1 03:57:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:22 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:57:22 localhost podman[106305]: 2026-02-01 08:57:22.718810233 +0000 UTC m=+0.174808956 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, release=1766032510, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc.) 
Feb 1 03:57:22 localhost podman[106305]: 2026-02-01 08:57:22.759085449 +0000 UTC m=+0.215084162 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:57:22 localhost podman[106305]: unhealthy Feb 1 03:57:22 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:22 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. 
Feb 1 03:57:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17495 DF PROTO=TCP SPT=59604 DPT=9101 SEQ=2326194972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645170D0000000001030307) Feb 1 03:57:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22113 DF PROTO=TCP SPT=51962 DPT=9882 SEQ=2863784275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64527FC0000000001030307) Feb 1 03:57:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:a1:06:ee MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=46032 SEQ=464998252 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 1 03:57:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22115 DF PROTO=TCP SPT=51962 DPT=9882 SEQ=2863784275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645340D0000000001030307) Feb 1 03:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:57:35 localhost podman[106344]: 2026-02-01 08:57:35.363779333 +0000 UTC m=+0.080249954 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 1 03:57:35 localhost podman[106344]: 2026-02-01 08:57:35.586781451 +0000 UTC m=+0.303252052 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr) Feb 1 03:57:35 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. 
Feb 1 03:57:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21569 DF PROTO=TCP SPT=44318 DPT=9102 SEQ=930796333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645418D0000000001030307) Feb 1 03:57:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27303 DF PROTO=TCP SPT=43764 DPT=9100 SEQ=1052198409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6454E0E0000000001030307) Feb 1 03:57:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17496 DF PROTO=TCP SPT=59604 DPT=9101 SEQ=2326194972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645570D0000000001030307) Feb 1 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:57:43 localhost systemd[1]: tmp-crun.c0sbFG.mount: Deactivated successfully. Feb 1 03:57:43 localhost podman[106373]: 2026-02-01 08:57:43.872975986 +0000 UTC m=+0.086801379 container health_status 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 1 03:57:43 localhost podman[106373]: 2026-02-01 08:57:43.884896918 +0000 UTC m=+0.098722301 container exec_died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=) Feb 1 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. Feb 1 03:57:43 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Deactivated successfully. 
Feb 1 03:57:43 localhost podman[106392]: 2026-02-01 08:57:43.962030475 +0000 UTC m=+0.069607163 container health_status e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:57:43 localhost podman[106392]: 2026-02-01 08:57:43.976698342 +0000 UTC m=+0.084275010 container exec_died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.13, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 1 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:57:43 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Deactivated successfully. 
Feb 1 03:57:44 localhost podman[106412]: 2026-02-01 08:57:44.061948151 +0000 UTC m=+0.072789702 container health_status 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:57:44 localhost podman[106412]: 2026-02-01 08:57:44.069886689 +0000 UTC m=+0.080728220 container exec_died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:57:44 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Deactivated successfully. 
Feb 1 03:57:44 localhost podman[106411]: 2026-02-01 08:57:44.120907591 +0000 UTC m=+0.133575008 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:57:44 localhost podman[106411]: 2026-02-01 08:57:44.141724401 +0000 UTC m=+0.154391818 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 1 03:57:44 localhost podman[106411]: unhealthy Feb 1 03:57:44 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:44 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. 
Feb 1 03:57:44 localhost podman[106413]: Error: container 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c is not running Feb 1 03:57:44 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Main process exited, code=exited, status=125/n/a Feb 1 03:57:44 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed with result 'exit-code'. Feb 1 03:57:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22117 DF PROTO=TCP SPT=51962 DPT=9882 SEQ=2863784275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645650D0000000001030307) Feb 1 03:57:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21571 DF PROTO=TCP SPT=44318 DPT=9102 SEQ=930796333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645710D0000000001030307) Feb 1 03:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:57:50 localhost podman[106463]: 2026-02-01 08:57:50.848570824 +0000 UTC m=+0.070078458 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_migration_target, 
description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:57:51 localhost podman[106463]: 2026-02-01 08:57:51.207042417 +0000 UTC m=+0.428550041 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64) Feb 1 03:57:51 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. 
Feb 1 03:57:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27305 DF PROTO=TCP SPT=43764 DPT=9100 SEQ=1052198409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6457F0D0000000001030307) Feb 1 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:57:52 localhost systemd[1]: tmp-crun.vBIcv5.mount: Deactivated successfully. Feb 1 03:57:52 localhost podman[106485]: 2026-02-01 08:57:52.87135221 +0000 UTC m=+0.084581339 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, container_name=ovn_metadata_agent, release=1766032510, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5) Feb 1 03:57:52 localhost podman[106486]: 2026-02-01 08:57:52.910550953 +0000 UTC m=+0.123850214 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510) Feb 1 03:57:52 localhost podman[106486]: 2026-02-01 08:57:52.922328361 +0000 UTC m=+0.135627622 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, config_id=tripleo_step4, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 1 03:57:52 localhost podman[106486]: unhealthy Feb 1 03:57:52 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:52 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:57:52 localhost podman[106485]: 2026-02-01 08:57:52.962914957 +0000 UTC m=+0.176144046 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, build-date=2026-01-12T22:56:19Z, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 1 03:57:52 localhost podman[106485]: unhealthy Feb 1 03:57:52 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:57:52 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:57:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33389 DF PROTO=TCP SPT=37260 DPT=9101 SEQ=3828165523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6458B0D0000000001030307) Feb 1 03:57:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:57:56 localhost recover_tripleo_nova_virtqemud[106527]: 62016 Feb 1 03:57:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:57:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:58:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36115 DF PROTO=TCP SPT=46678 DPT=9882 SEQ=1152359193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6459D2C0000000001030307) Feb 1 03:58:00 localhost podman[106189]: time="2026-02-01T08:58:00Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Feb 1 03:58:00 localhost systemd[1]: libpod-79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.scope: Deactivated successfully. Feb 1 03:58:00 localhost systemd[1]: libpod-79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.scope: Consumed 6.232s CPU time. 
Feb 1 03:58:00 localhost podman[106189]: 2026-02-01 08:58:00.306146043 +0000 UTC m=+42.082398570 container stop 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:58:00 localhost podman[106189]: 2026-02-01 08:58:00.339769842 +0000 UTC m=+42.116022389 container died 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, distribution-scope=public, release=1766032510, 
build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible) Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.timer: Deactivated successfully. Feb 1 03:58:00 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c. Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed to open /run/systemd/transient/79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: No such file or directory Feb 1 03:58:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c-userdata-shm.mount: Deactivated successfully. Feb 1 03:58:00 localhost systemd[1]: var-lib-containers-storage-overlay-19867aa9ce07feb42ab4d071eed0ec581b8be5de4a737b08d8913c4970e7b3a5-merged.mount: Deactivated successfully. 
Feb 1 03:58:00 localhost podman[106189]: 2026-02-01 08:58:00.388774111 +0000 UTC m=+42.165026598 container cleanup 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team) Feb 1 03:58:00 localhost podman[106189]: ceilometer_agent_ipmi Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.timer: Failed to open /run/systemd/transient/79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.timer: No such file or directory Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed to open /run/systemd/transient/79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: No such file or directory Feb 1 03:58:00 localhost podman[106529]: 2026-02-01 08:58:00.436438538 +0000 UTC m=+0.117770855 container cleanup 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Feb 1 03:58:00 localhost systemd[1]: libpod-conmon-79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.scope: Deactivated successfully. 
Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.timer: Failed to open /run/systemd/transient/79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.timer: No such file or directory Feb 1 03:58:00 localhost systemd[1]: 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: Failed to open /run/systemd/transient/79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c.service: No such file or directory Feb 1 03:58:00 localhost podman[106546]: 2026-02-01 08:58:00.536746867 +0000 UTC m=+0.065154833 container cleanup 79d19d207a1ed8a10afe7744d76c358403339bb60509d58b24293c2dabffe10c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '63e53a2f3cd2422147592f2c2c6c2f61'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true) Feb 1 03:58:00 localhost podman[106546]: ceilometer_agent_ipmi Feb 1 03:58:00 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Feb 1 03:58:00 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. 
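The podman cleanup events for ceilometer_agent_ipmi above (and the ones that follow for collectd, iscsid, logrotate_crond and metrics_qdr) embed the full TripleO container definition as a config_data= label, written as a Python dict literal inside the parenthesised label list. A minimal editorial sketch for pulling that dict back out of an exported journal line, assuming the log is available as plain text; the function name is illustrative:

```python
import ast

def extract_config_data(log_line: str) -> dict:
    """Pull the config_data={...} dict literal out of a podman event line.

    The value is a Python dict literal, so we locate the matching closing
    brace by counting braces and hand the slice to ast.literal_eval.
    """
    start = log_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(log_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(log_line[start:i + 1])
    raise ValueError("unbalanced braces in config_data value")

# Usage (against one of the cleanup lines above):
#   cfg = extract_config_data(line)
#   print(cfg["image"], cfg["healthcheck"]["test"], len(cfg["volumes"]))
```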
Feb 1 03:58:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36116 DF PROTO=TCP SPT=46678 DPT=9882 SEQ=1152359193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645A14D0000000001030307) Feb 1 03:58:01 localhost python3.9[106648]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:02 localhost systemd[1]: Reloading. Feb 1 03:58:02 localhost systemd-sysv-generator[106676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:02 localhost systemd-rc-local-generator[106672]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:58:02 localhost systemd[1]: Stopping collectd container... Feb 1 03:58:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36117 DF PROTO=TCP SPT=46678 DPT=9882 SEQ=1152359193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645A94D0000000001030307) Feb 1 03:58:05 localhost systemd[1]: libpod-e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.scope: Deactivated successfully. Feb 1 03:58:05 localhost systemd[1]: libpod-e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.scope: Consumed 2.107s CPU time. 
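The kernel DROPPING: entries interleaved with the shutdowns (here a SYN from 192.168.122.10 to port 9882 on br-ex) follow the usual netfilter log field layout. A small editorial sketch that parses only the fields these lines actually carry; the DROP_RE name and the choice of fields are illustrative:

```python
import re

# Field layout taken from the kernel lines in this journal, e.g.
# "DROPPING: IN=br-ex OUT= ... SRC=192.168.122.10 DST=192.168.122.108 ... PROTO=TCP SPT=46678 DPT=9882 ..."
# Leading spaces before SRC/DST/PROTO keep MACSRC/MACDST/MACPROTO from matching.
DROP_RE = re.compile(
    r"DROPPING: IN=(?P<in>\S*) OUT=(?P<out>\S*)"
    r".*? SRC=(?P<src>\S+) DST=(?P<dst>\S+)"
    r".*? PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
)

def parse_drop(line: str):
    m = DROP_RE.search(line)
    return m.groupdict() if m else None

sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 "
          "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 "
          "TTL=62 ID=36116 DF PROTO=TCP SPT=46678 DPT=9882 SEQ=1152359193 ACK=0 WINDOW=32640")
print(parse_drop(sample))  # {'in': 'br-ex', 'out': '', 'src': '192.168.122.10', ..., 'dpt': '9882'}
```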
Feb 1 03:58:05 localhost podman[106689]: 2026-02-01 08:58:05.331147475 +0000 UTC m=+2.416318807 container stop e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, version=17.1.13, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.openshift.expose-services=, distribution-scope=public) Feb 1 03:58:05 localhost podman[106689]: 2026-02-01 08:58:05.360031686 +0000 UTC m=+2.445202978 container died e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true) Feb 1 03:58:05 localhost systemd[1]: tmp-crun.BGh5Ss.mount: Deactivated successfully. Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.timer: Deactivated successfully. Feb 1 03:58:05 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2. 
Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Failed to open /run/systemd/transient/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: No such file or directory Feb 1 03:58:05 localhost podman[106689]: 2026-02-01 08:58:05.428944076 +0000 UTC m=+2.514115358 container cleanup e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:58:05 localhost podman[106689]: collectd Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.timer: Failed to open /run/systemd/transient/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.timer: No such file or directory Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Failed to open 
/run/systemd/transient/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: No such file or directory Feb 1 03:58:05 localhost podman[106701]: 2026-02-01 08:58:05.453425359 +0000 UTC m=+0.110164747 container cleanup e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_id=tripleo_step3, container_name=collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 1 03:58:05 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:05 localhost systemd[1]: libpod-conmon-e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.scope: Deactivated successfully. 
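Each container teardown in this section is followed by repeated "<container-id>.timer"/".service: Failed to open /run/systemd/transient/...: No such file or directory" messages, whereas the only hard failure recorded so far is tripleo_collectd.service exiting with status=1 and "Failed with result 'exit-code'". A rough editorial triage sketch that separates the two patterns when scanning an exported journal; the regexes simply encode the message shapes visible above:

```python
import re
from collections import Counter

# Per-container transient healthcheck units: 64-hex-digit container ID + .service/.timer
TRANSIENT_RE = re.compile(
    r"(?P<unit>[0-9a-f]{64}\.(?:service|timer)): Failed to open /run/systemd/transient/"
)
# Real unit failures, e.g. "tripleo_collectd.service: Failed with result 'exit-code'."
RESULT_RE = re.compile(r"(?P<unit>tripleo_\S+\.service): Failed with result '(?P<result>[^']+)'")

def triage(journal_text: str):
    """Count the repetitive transient-unit messages per unit; collect genuine unit failures."""
    transient = Counter()
    failures = []
    for line in journal_text.splitlines():
        m = TRANSIENT_RE.search(line)
        if m:
            transient[m.group("unit")] += 1
            continue
        m = RESULT_RE.search(line)
        if m:
            failures.append((m.group("unit"), m.group("result")))
    return transient, failures
```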
Feb 1 03:58:05 localhost podman[106731]: error opening file `/run/crun/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2/status`: No such file or directory Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.timer: Failed to open /run/systemd/transient/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.timer: No such file or directory Feb 1 03:58:05 localhost systemd[1]: e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: Failed to open /run/systemd/transient/e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2.service: No such file or directory Feb 1 03:58:05 localhost podman[106719]: 2026-02-01 08:58:05.561524592 +0000 UTC m=+0.077101186 container cleanup e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, build-date=2026-01-12T22:10:15Z, container_name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 1 03:58:05 localhost podman[106719]: collectd Feb 1 
03:58:05 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Feb 1 03:58:05 localhost systemd[1]: Stopped collectd container. Feb 1 03:58:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. Feb 1 03:58:05 localhost podman[106780]: 2026-02-01 08:58:05.876862751 +0000 UTC m=+0.085017234 container health_status 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 1 03:58:06 localhost podman[106780]: 2026-02-01 08:58:06.10317305 +0000 UTC m=+0.311327533 container exec_died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, 
build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd) Feb 1 03:58:06 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Deactivated successfully. Feb 1 03:58:06 localhost python3.9[106852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:06 localhost systemd[1]: Reloading. Feb 1 03:58:06 localhost systemd-sysv-generator[106883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:06 localhost systemd-rc-local-generator[106877]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
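The ansible-ansible.builtin.systemd_service entries record each tripleo_*.service unit being passed enabled=False and state=stopped (tripleo_iscsid.service in the entry above). As a rough sketch of what those two parameters describe, not of the module's actual implementation, the equivalent manual steps look like this:

```python
import subprocess

def stop_and_disable(unit: str) -> None:
    """Roughly what enabled=False, state=stopped amounts to for a systemd unit."""
    subprocess.run(["systemctl", "disable", unit], check=True)
    subprocess.run(["systemctl", "stop", unit], check=True)

# e.g. the unit handled in the invocation above
stop_and_disable("tripleo_iscsid.service")
```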
Feb 1 03:58:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42357 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=1683018467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645B6CD0000000001030307) Feb 1 03:58:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e861c060c0ea21c0188d51464d865661f600bc180bfdb1205536c0574a9a82a2-userdata-shm.mount: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: var-lib-containers-storage-overlay-8f493ed320f2136eba98c6f6d73d7580e3273443b9599c34d1438e87453daf45-merged.mount: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: Stopping iscsid container... Feb 1 03:58:06 localhost systemd[1]: libpod-28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.scope: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: libpod-28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.scope: Consumed 1.005s CPU time. Feb 1 03:58:06 localhost podman[106893]: 2026-02-01 08:58:06.731806783 +0000 UTC m=+0.079139399 container died 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13) Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.timer: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504. Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Failed to open /run/systemd/transient/28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: No such file or directory Feb 1 03:58:06 localhost systemd[1]: tmp-crun.XN5HDE.mount: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504-userdata-shm.mount: Deactivated successfully. Feb 1 03:58:06 localhost podman[106893]: 2026-02-01 08:58:06.773093532 +0000 UTC m=+0.120426158 container cleanup 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid) Feb 1 03:58:06 
localhost podman[106893]: iscsid Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.timer: Failed to open /run/systemd/transient/28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.timer: No such file or directory Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Failed to open /run/systemd/transient/28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: No such file or directory Feb 1 03:58:06 localhost podman[106905]: 2026-02-01 08:58:06.810349184 +0000 UTC m=+0.073169124 container cleanup 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 1 03:58:06 localhost systemd[1]: libpod-conmon-28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.scope: Deactivated successfully. 
Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.timer: Failed to open /run/systemd/transient/28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.timer: No such file or directory Feb 1 03:58:06 localhost systemd[1]: 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: Failed to open /run/systemd/transient/28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504.service: No such file or directory Feb 1 03:58:06 localhost podman[106923]: 2026-02-01 08:58:06.912909393 +0000 UTC m=+0.068996093 container cleanup 28eba75e28267ca71a7c3445962ab0b9db8f4cbb8ecd47fe22abfa4336b1c504 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 03:58:06 localhost podman[106923]: iscsid Feb 1 03:58:06 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Feb 1 03:58:06 localhost systemd[1]: Stopped iscsid container. Feb 1 03:58:07 localhost systemd[1]: var-lib-containers-storage-overlay-179e7ed4ab403439e752a2c426c6db4ca9807018662c061e320fe01562a6e116-merged.mount: Deactivated successfully. 
Feb 1 03:58:07 localhost python3.9[107026]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:07 localhost systemd[1]: Reloading. Feb 1 03:58:07 localhost systemd-sysv-generator[107049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:07 localhost systemd-rc-local-generator[107046]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:58:08 localhost systemd[1]: Stopping logrotate_crond container... Feb 1 03:58:08 localhost systemd[1]: libpod-07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.scope: Deactivated successfully. Feb 1 03:58:08 localhost podman[107066]: 2026-02-01 08:58:08.11094169 +0000 UTC m=+0.067334932 container died 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, 
tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4) Feb 1 03:58:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.timer: Deactivated successfully. Feb 1 03:58:08 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7. Feb 1 03:58:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Failed to open /run/systemd/transient/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: No such file or directory Feb 1 03:58:08 localhost podman[107066]: 2026-02-01 08:58:08.16416426 +0000 UTC m=+0.120557512 container cleanup 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, distribution-scope=public, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 1 03:58:08 localhost podman[107066]: logrotate_crond Feb 1 03:58:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.timer: Failed to open /run/systemd/transient/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.timer: No such file or directory Feb 1 03:58:08 localhost systemd[1]: 
07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Failed to open /run/systemd/transient/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: No such file or directory Feb 1 03:58:08 localhost podman[107080]: 2026-02-01 08:58:08.200758542 +0000 UTC m=+0.078043216 container cleanup 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 1 03:58:08 localhost systemd[1]: libpod-conmon-07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.scope: Deactivated successfully. 
Feb 1 03:58:08 localhost podman[107106]: error opening file `/run/crun/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7/status`: No such file or directory Feb 1 03:58:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.timer: Failed to open /run/systemd/transient/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.timer: No such file or directory Feb 1 03:58:08 localhost systemd[1]: 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: Failed to open /run/systemd/transient/07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7.service: No such file or directory Feb 1 03:58:08 localhost podman[107095]: 2026-02-01 08:58:08.310747533 +0000 UTC m=+0.078111197 container cleanup 07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:58:08 localhost podman[107095]: logrotate_crond Feb 1 03:58:08 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Feb 1 03:58:08 localhost systemd[1]: Stopped logrotate_crond container. 
Feb 1 03:58:08 localhost systemd[1]: var-lib-containers-storage-overlay-d2793eb0d727691e97e5e2f52ec5e9822efebe0b6bf32e0fb26a5897fd53d53c-merged.mount: Deactivated successfully. Feb 1 03:58:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07fa29680495e8092117ae3481a9591dd99a632431ea586d039da8eb996b3aa7-userdata-shm.mount: Deactivated successfully. Feb 1 03:58:09 localhost python3.9[107199]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2294 DF PROTO=TCP SPT=47384 DPT=9100 SEQ=302641157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645C34D0000000001030307) Feb 1 03:58:10 localhost systemd[1]: Reloading. Feb 1 03:58:10 localhost systemd-sysv-generator[107228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:10 localhost systemd-rc-local-generator[107224]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:58:10 localhost systemd[1]: Stopping metrics_qdr container... Feb 1 03:58:10 localhost kernel: qdrouterd[54747]: segfault at 0 ip 00007fc4eedee7cb sp 00007ffc5069d3e0 error 4 in libc.so.6[7fc4eed8b000+175000] Feb 1 03:58:10 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Feb 1 03:58:10 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Feb 1 03:58:10 localhost systemd[1]: Started Process Core Dump (PID 107255/UID 0). Feb 1 03:58:10 localhost systemd-coredump[107256]: Resource limits disable core dumping for process 54747 (qdrouterd). Feb 1 03:58:10 localhost systemd-coredump[107256]: Process 54747 (qdrouterd) of user 42465 dumped core. Feb 1 03:58:10 localhost systemd[1]: systemd-coredump@0-107255-0.service: Deactivated successfully. 
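The qdrouterd segfault above was picked up by systemd-coredump ("Process 54747 (qdrouterd) of user 42465 dumped core"). If the dump is still retained in the journal, its metadata and backtrace can be retrieved with coredumpctl; a small editorial wrapper sketch, assuming coredumpctl is installed on the host:

```python
import subprocess

def coredump_info(pid: int) -> str:
    """Fetch systemd-coredump metadata for a crashed process, e.g. the qdrouterd PID above."""
    result = subprocess.run(
        ["coredumpctl", "info", str(pid)],
        capture_output=True, text=True, check=False,
    )
    return result.stdout or result.stderr

print(coredump_info(54747))
```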
Feb 1 03:58:10 localhost podman[107240]: 2026-02-01 08:58:10.735717138 +0000 UTC m=+0.234089355 container died 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13) Feb 1 03:58:10 localhost systemd[1]: libpod-75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.scope: Deactivated successfully. Feb 1 03:58:10 localhost systemd[1]: libpod-75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.scope: Consumed 27.773s CPU time. Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.timer: Deactivated successfully. Feb 1 03:58:10 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7. 
Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Failed to open /run/systemd/transient/75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: No such file or directory Feb 1 03:58:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7-userdata-shm.mount: Deactivated successfully. Feb 1 03:58:10 localhost podman[107240]: 2026-02-01 08:58:10.779927817 +0000 UTC m=+0.278300074 container cleanup 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1) Feb 1 03:58:10 localhost podman[107240]: metrics_qdr Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.timer: Failed to open /run/systemd/transient/75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.timer: No such file or directory Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Failed to open 
/run/systemd/transient/75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: No such file or directory Feb 1 03:58:10 localhost podman[107260]: 2026-02-01 08:58:10.81333914 +0000 UTC m=+0.067799256 container cleanup 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1) Feb 1 03:58:10 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Feb 1 03:58:10 localhost systemd[1]: libpod-conmon-75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.scope: Deactivated successfully. 
Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.timer: Failed to open /run/systemd/transient/75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.timer: No such file or directory Feb 1 03:58:10 localhost systemd[1]: 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: Failed to open /run/systemd/transient/75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7.service: No such file or directory Feb 1 03:58:10 localhost podman[107275]: 2026-02-01 08:58:10.904818323 +0000 UTC m=+0.065582837 container cleanup 75c8a36d6f9eaf1389dcbf73d6bf05a74137a01275fa44aa87fda8dd481c01e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b8acc88e7150a91ea5eddde509e925f2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible) Feb 1 03:58:10 localhost podman[107275]: metrics_qdr Feb 1 03:58:10 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Feb 1 03:58:10 localhost systemd[1]: Stopped metrics_qdr container. Feb 1 03:58:11 localhost systemd[1]: var-lib-containers-storage-overlay-f747231ffc56e15c128dac75ec633f161eee676530b28d17cb7b8d0be7728054-merged.mount: Deactivated successfully. 
Feb 1 03:58:11 localhost python3.9[107379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:12 localhost python3.9[107472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25358 DF PROTO=TCP SPT=60424 DPT=9101 SEQ=733340608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645D04E0000000001030307) Feb 1 03:58:14 localhost python3.9[107565]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. Feb 1 03:58:14 localhost podman[107659]: 2026-02-01 08:58:14.618124873 +0000 UTC m=+0.089371759 container health_status 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, tcib_managed=true, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 1 03:58:14 localhost podman[107659]: 2026-02-01 08:58:14.662619921 +0000 UTC m=+0.133866757 container exec_died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 1 03:58:14 localhost podman[107659]: unhealthy Feb 1 03:58:14 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:14 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:58:14 localhost python3.9[107658]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36119 DF PROTO=TCP SPT=46678 DPT=9882 SEQ=1152359193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645D90D0000000001030307) Feb 1 03:58:15 localhost systemd[1]: Reloading. Feb 1 03:58:15 localhost systemd-rc-local-generator[107708]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:15 localhost systemd-sysv-generator[107712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:58:16 localhost systemd[1]: Stopping nova_compute container... Feb 1 03:58:16 localhost systemd[1]: tmp-crun.blxDUP.mount: Deactivated successfully. Feb 1 03:58:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42359 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=1683018467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645E70D0000000001030307) Feb 1 03:58:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:58:21 localhost systemd[1]: tmp-crun.tKWm0K.mount: Deactivated successfully. 
Feb 1 03:58:21 localhost podman[107796]: 2026-02-01 08:58:21.638105233 +0000 UTC m=+0.105061229 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step4) Feb 1 03:58:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2296 DF PROTO=TCP SPT=47384 DPT=9100 SEQ=302641157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA645F30E0000000001030307) Feb 1 03:58:22 localhost podman[107796]: 2026-02-01 08:58:22.038781264 +0000 UTC m=+0.505737230 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1) Feb 1 03:58:22 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. Feb 1 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
Feb 1 03:58:23 localhost podman[107836]: 2026-02-01 08:58:23.867790116 +0000 UTC m=+0.082691661 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:58:23 localhost podman[107836]: 2026-02-01 08:58:23.88075665 +0000 UTC m=+0.095658225 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, 
Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:58:23 localhost podman[107836]: unhealthy Feb 1 03:58:23 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:23 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:58:23 localhost systemd[1]: tmp-crun.6yy6fr.mount: Deactivated successfully. Feb 1 03:58:23 localhost podman[107835]: 2026-02-01 08:58:23.971381378 +0000 UTC m=+0.188188762 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, 
io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 1 03:58:23 localhost podman[107835]: 2026-02-01 08:58:23.986098587 +0000 UTC m=+0.202905801 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1) Feb 1 03:58:23 localhost podman[107835]: unhealthy Feb 1 03:58:23 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:23 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:58:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25360 DF PROTO=TCP SPT=60424 DPT=9101 SEQ=733340608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646010D0000000001030307) Feb 1 03:58:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38260 DF PROTO=TCP SPT=49670 DPT=9882 SEQ=485417032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646129B0000000001030307) Feb 1 03:58:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38261 DF PROTO=TCP SPT=49670 DPT=9882 SEQ=485417032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646168E0000000001030307) Feb 1 03:58:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38262 DF PROTO=TCP SPT=49670 DPT=9882 SEQ=485417032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6461E8D0000000001030307) Feb 1 03:58:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42700 DF PROTO=TCP SPT=45450 DPT=9102 SEQ=4184278850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6462C0D0000000001030307) Feb 1 03:58:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10366 DF PROTO=TCP SPT=33804 DPT=9100 SEQ=252398568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646388D0000000001030307) Feb 1 03:58:42 localhost sshd[107876]: main: sshd: ssh-rsa algorithm is disabled Feb 1 03:58:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10581 DF PROTO=TCP SPT=45426 DPT=9101 SEQ=1263811756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646458D0000000001030307) Feb 1 03:58:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:58:44 localhost podman[107878]: Error: container 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e is not running Feb 1 03:58:44 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Main process exited, code=exited, status=125/n/a Feb 1 03:58:44 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed with result 'exit-code'. Feb 1 03:58:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38264 DF PROTO=TCP SPT=49670 DPT=9882 SEQ=485417032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6464F0D0000000001030307) Feb 1 03:58:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42702 DF PROTO=TCP SPT=45450 DPT=9102 SEQ=4184278850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6465D0D0000000001030307) Feb 1 03:58:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10368 DF PROTO=TCP SPT=33804 DPT=9100 SEQ=252398568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646690E0000000001030307) Feb 1 03:58:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:58:52 localhost podman[107889]: 2026-02-01 08:58:52.617957868 +0000 UTC m=+0.083681842 container health_status 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:58:53 localhost podman[107889]: 2026-02-01 08:58:53.009476032 +0000 UTC m=+0.475199956 container exec_died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, url=https://www.redhat.com, version=17.1.13) Feb 1 03:58:53 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Deactivated successfully. 
Feb 1 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:58:54 localhost podman[107912]: 2026-02-01 08:58:54.871243217 +0000 UTC m=+0.086073576 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:58:54 localhost podman[107912]: 2026-02-01 08:58:54.88672151 +0000 UTC m=+0.101551829 container exec_died 
e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510) Feb 1 03:58:54 localhost podman[107912]: unhealthy Feb 1 03:58:54 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:54 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
Feb 1 03:58:54 localhost podman[107913]: 2026-02-01 08:58:54.981044673 +0000 UTC m=+0.192166817 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:58:55 localhost podman[107913]: 2026-02-01 08:58:55.023786096 +0000 UTC m=+0.234908200 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller) Feb 1 03:58:55 localhost podman[107913]: unhealthy Feb 1 03:58:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:58:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:58:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10583 DF PROTO=TCP SPT=45426 DPT=9101 SEQ=1263811756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646750D0000000001030307) Feb 1 03:58:58 localhost podman[107721]: time="2026-02-01T08:58:58Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Feb 1 03:58:58 localhost systemd[1]: libpod-1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.scope: Deactivated successfully. Feb 1 03:58:58 localhost systemd[1]: libpod-1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.scope: Consumed 27.818s CPU time. 
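At this point both ovn_metadata_agent and ovn_controller report health_status=unhealthy and their transient health-check units exit with status 1. A minimal Python sketch for pulling such events out of a journal dump like this one; it assumes the podman event text keeps the exact "container health_status <64-hex id> (label=value, label=value, ...)" shape shown above, and the script name used in the usage note is purely illustrative:

    import re
    import sys

    # Matches podman journal events of the form:
    #   container health_status <64-hex-id> (label=value, label=value, ...)
    EVENT = re.compile(r"container health_status ([0-9a-f]{64}) \(([^)]*)\)")

    def health_events(text):
        """Yield (container_name, short id, status) for each health_status event."""
        for cid, labels in EVENT.findall(text):
            # Naive label split; good enough for container_name and health_status,
            # whose values contain no ", " in logs shaped like the one above.
            fields = dict(kv.split("=", 1) for kv in labels.split(", ") if "=" in kv)
            yield fields.get("container_name", "?"), cid[:12], fields.get("health_status", "?")

    if __name__ == "__main__":
        for name, cid, status in health_events(sys.stdin.read()):
            if status == "unhealthy":
                print(f"{name} ({cid}) failed its podman health check")

For example, python3 flag_unhealthy.py < messages (file names hypothetical) would list ovn_metadata_agent and ovn_controller for the window shown here.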
Feb 1 03:58:58 localhost podman[107721]: 2026-02-01 08:58:58.302273098 +0000 UTC m=+42.088728432 container stop 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:58:58 localhost podman[107721]: 2026-02-01 08:58:58.336570788 +0000 UTC m=+42.123026102 container died 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, distribution-scope=public) Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.timer: Deactivated successfully. Feb 1 03:58:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e. 
Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed to open /run/systemd/transient/1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: No such file or directory Feb 1 03:58:58 localhost systemd[1]: tmp-crun.rzzXw4.mount: Deactivated successfully. Feb 1 03:58:58 localhost systemd[1]: var-lib-containers-storage-overlay-66409f2cae0cc3fdf46266cf7a9b4ef7f2208d64cf24e912c16b5d672be00b92-merged.mount: Deactivated successfully. Feb 1 03:58:58 localhost podman[107721]: 2026-02-01 08:58:58.447790408 +0000 UTC m=+42.234245732 container cleanup 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, release=1766032510, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 03:58:58 localhost podman[107721]: nova_compute Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.timer: Failed to open /run/systemd/transient/1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.timer: No such file or directory Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed to open /run/systemd/transient/1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: No such file or directory Feb 1 03:58:58 localhost podman[107953]: 2026-02-01 08:58:58.464540721 +0000 UTC m=+0.146200933 container cleanup 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, release=1766032510, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 1 03:58:58 localhost systemd[1]: libpod-conmon-1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.scope: Deactivated successfully. Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.timer: Failed to open /run/systemd/transient/1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.timer: No such file or directory Feb 1 03:58:58 localhost systemd[1]: 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: Failed to open /run/systemd/transient/1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e.service: No such file or directory Feb 1 03:58:58 localhost podman[107967]: 2026-02-01 08:58:58.542519573 +0000 UTC m=+0.051849248 container cleanup 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=) Feb 1 03:58:58 localhost podman[107967]: nova_compute Feb 1 03:58:58 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Feb 1 03:58:58 localhost systemd[1]: Stopped nova_compute container. Feb 1 03:58:58 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.101s CPU time, no IO. Feb 1 03:58:59 localhost python3.9[108071]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:58:59 localhost systemd[1]: Reloading. Feb 1 03:58:59 localhost systemd-rc-local-generator[108097]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:58:59 localhost systemd-sysv-generator[108102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:58:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:58:59 localhost systemd[1]: Stopping nova_migration_target container... Feb 1 03:58:59 localhost systemd[1]: libpod-080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.scope: Deactivated successfully. Feb 1 03:58:59 localhost systemd[1]: libpod-080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.scope: Consumed 33.437s CPU time. 
Feb 1 03:58:59 localhost podman[108112]: 2026-02-01 08:58:59.833040565 +0000 UTC m=+0.079241093 container died 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Feb 1 03:58:59 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.timer: Deactivated successfully. Feb 1 03:58:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96. Feb 1 03:58:59 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Failed to open /run/systemd/transient/080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: No such file or directory Feb 1 03:58:59 localhost systemd[1]: tmp-crun.OE4cvd.mount: Deactivated successfully. Feb 1 03:58:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96-userdata-shm.mount: Deactivated successfully. 
Feb 1 03:58:59 localhost podman[108112]: 2026-02-01 08:58:59.882709785 +0000 UTC m=+0.128910243 container cleanup 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:58:59 localhost podman[108112]: nova_migration_target Feb 1 03:58:59 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.timer: Failed to open /run/systemd/transient/080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.timer: No such file or directory Feb 1 03:58:59 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Failed to open /run/systemd/transient/080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: No such file or directory Feb 1 03:58:59 localhost podman[108124]: 2026-02-01 08:58:59.921606909 +0000 UTC m=+0.079292476 container cleanup 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, tcib_managed=true) Feb 1 03:58:59 localhost systemd[1]: libpod-conmon-080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.scope: Deactivated successfully. 
Feb 1 03:59:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=360 DF PROTO=TCP SPT=46140 DPT=9882 SEQ=139889354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646878C0000000001030307) Feb 1 03:59:00 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.timer: Failed to open /run/systemd/transient/080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.timer: No such file or directory Feb 1 03:59:00 localhost systemd[1]: 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: Failed to open /run/systemd/transient/080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96.service: No such file or directory Feb 1 03:59:00 localhost podman[108141]: 2026-02-01 08:59:00.025020335 +0000 UTC m=+0.072475512 container cleanup 080dbed59cc2bdf7ec953594b316706c298919e7e0007b17239bca57f337ed96 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 1 03:59:00 localhost podman[108141]: nova_migration_target 
Feb 1 03:59:00 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Feb 1 03:59:00 localhost systemd[1]: Stopped nova_migration_target container. Feb 1 03:59:00 localhost python3.9[108245]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 03:59:00 localhost systemd[1]: var-lib-containers-storage-overlay-8fb1968646de61e5d6c5b7938dce54da276edc06f0bc75651b588722ba09cba1-merged.mount: Deactivated successfully. Feb 1 03:59:00 localhost systemd[1]: Reloading. Feb 1 03:59:00 localhost systemd-sysv-generator[108275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 03:59:00 localhost systemd-rc-local-generator[108271]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 03:59:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=361 DF PROTO=TCP SPT=46140 DPT=9882 SEQ=139889354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6468B8D0000000001030307) Feb 1 03:59:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 03:59:01 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Feb 1 03:59:01 localhost systemd[1]: libpod-4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa.scope: Deactivated successfully. 
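The systemd lines in this stretch show tripleo_ansible stopping the compute containers one service at a time (nova_compute, then nova_migration_target, then nova_virtlogd_wrapper). A short Python sketch for turning those lines into a stop timeline; it assumes the syslog timestamp format and the "Stopping/Stopped <name> container" wording stay exactly as shown above:

    import re
    import sys

    # Matches lines such as:
    #   Feb 1 03:58:58 localhost systemd[1]: Stopped nova_compute container.
    EVENT = re.compile(
        r"(?P<ts>[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}) \S+ systemd\[1\]: "
        r"(?P<verb>Stopping|Stopped) (?P<name>\S+) container"
    )

    def stop_timeline(text):
        """Yield (timestamp, Stopping|Stopped, container name) in log order."""
        for m in EVENT.finditer(text):
            yield m.group("ts"), m.group("verb"), m.group("name")

    if __name__ == "__main__":
        for ts, verb, name in stop_timeline(sys.stdin.read()):
            print(f"{ts}  {verb:<9} {name}")

The pattern deliberately requires the literal word "container", so the "Stopped /usr/bin/podman healthcheck run ..." lines are not picked up.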
Feb 1 03:59:01 localhost podman[108286]: 2026-02-01 08:59:01.324341661 +0000 UTC m=+0.072338587 container stop 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, container_name=nova_virtlogd_wrapper, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, architecture=x86_64) Feb 1 03:59:01 localhost podman[108286]: 2026-02-01 08:59:01.360500269 +0000 UTC m=+0.108497155 container died 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Feb 1 03:59:01 localhost systemd[1]: tmp-crun.5ffSDJ.mount: Deactivated successfully. 
Feb 1 03:59:01 localhost podman[108286]: 2026-02-01 08:59:01.399513406 +0000 UTC m=+0.147510272 container cleanup 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, container_name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step3, distribution-scope=public, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}) Feb 1 03:59:01 localhost podman[108286]: nova_virtlogd_wrapper Feb 1 03:59:01 localhost podman[108298]: 2026-02-01 08:59:01.460022504 +0000 UTC m=+0.116793865 container cleanup 
4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3) Feb 1 03:59:01 localhost systemd[1]: var-lib-containers-storage-overlay-bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c-merged.mount: Deactivated successfully. 
Feb 1 03:59:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa-userdata-shm.mount: Deactivated successfully. Feb 1 03:59:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=362 DF PROTO=TCP SPT=46140 DPT=9882 SEQ=139889354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646938E0000000001030307) Feb 1 03:59:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4763 DF PROTO=TCP SPT=46602 DPT=9102 SEQ=3686674674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646A10D0000000001030307) Feb 1 03:59:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57105 DF PROTO=TCP SPT=50090 DPT=9100 SEQ=2105554257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646AD8D0000000001030307) Feb 1 03:59:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52467 DF PROTO=TCP SPT=59864 DPT=9101 SEQ=3165624299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646BACD0000000001030307) Feb 1 03:59:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=364 DF PROTO=TCP SPT=46140 DPT=9882 SEQ=139889354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646C30D0000000001030307) Feb 1 03:59:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4765 DF PROTO=TCP SPT=46602 DPT=9102 SEQ=3686674674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646D10D0000000001030307) Feb 1 03:59:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57107 DF PROTO=TCP SPT=50090 DPT=9100 SEQ=2105554257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646DD0D0000000001030307) Feb 1 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. 
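The recurring kernel DROPPING entries on br-ex (TCP SYNs from 192.168.122.10 to 192.168.122.108 on ports 9100, 9101, 9102 and 9882) look like output from a netfilter LOG rule configured with a "DROPPING: " prefix. A minimal Python sketch that tallies the dropped flows from a dump like this one, assuming the usual KEY=VALUE field layout shown above; this is an illustrative parser, not a supported tool:

    import re
    import sys
    from collections import Counter

    FIELD = re.compile(r"(\w+)=(\S+)")

    def dropped_flows(text):
        """Yield (SRC, DST, PROTO, DPT) for every kernel DROPPING entry."""
        # Several records may be fused onto one physical line, so split on the
        # log prefix itself rather than on newlines; stop at " OPT " so trailing
        # text from the next record is not parsed.
        for chunk in text.split("DROPPING:")[1:]:
            fields = dict(FIELD.findall(chunk.split(" OPT ")[0]))
            yield fields.get("SRC"), fields.get("DST"), fields.get("PROTO"), fields.get("DPT")

    if __name__ == "__main__":
        for flow, count in Counter(dropped_flows(sys.stdin.read())).most_common():
            src, dst, proto, dpt = flow
            print(f"{count:4d}  {src} -> {dst}  {proto}/{dpt}")

Against the window shown here it would report repeated blocked connection attempts to those four destination ports while the compute services are being stopped.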
Feb 1 03:59:25 localhost podman[108391]: 2026-02-01 08:59:25.376942891 +0000 UTC m=+0.091322940 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 03:59:25 localhost podman[108391]: 2026-02-01 08:59:25.399538206 +0000 UTC m=+0.113918255 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 03:59:25 localhost podman[108392]: 2026-02-01 08:59:25.441767493 +0000 UTC m=+0.156194193 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13) Feb 1 03:59:25 localhost podman[108391]: unhealthy Feb 1 03:59:25 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:59:25 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:59:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52469 DF PROTO=TCP SPT=59864 DPT=9101 SEQ=3165624299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646EB0D0000000001030307) Feb 1 03:59:25 localhost podman[108392]: 2026-02-01 08:59:25.506800833 +0000 UTC m=+0.221227573 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:59:25 localhost podman[108392]: unhealthy Feb 1 03:59:25 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:59:25 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 03:59:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 1 03:59:28 localhost recover_tripleo_nova_virtqemud[108429]: 62016 Feb 1 03:59:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 1 03:59:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 1 03:59:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46460 DF PROTO=TCP SPT=45212 DPT=9882 SEQ=530326567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA646FCBC0000000001030307) Feb 1 03:59:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46461 DF PROTO=TCP SPT=45212 DPT=9882 SEQ=530326567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64700CD0000000001030307) Feb 1 03:59:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46462 DF PROTO=TCP SPT=45212 DPT=9882 SEQ=530326567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64708CE0000000001030307) Feb 1 03:59:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15831 DF PROTO=TCP SPT=38952 DPT=9102 SEQ=3518675689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647164E0000000001030307) Feb 1 03:59:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61418 DF PROTO=TCP SPT=48410 DPT=9100 SEQ=3396714805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64722CE0000000001030307) Feb 1 03:59:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41673 DF PROTO=TCP SPT=39862 DPT=9101 SEQ=1408653870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647300D0000000001030307) Feb 1 03:59:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46464 DF PROTO=TCP SPT=45212 DPT=9882 SEQ=530326567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647390D0000000001030307) Feb 1 03:59:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15833 DF PROTO=TCP SPT=38952 DPT=9102 SEQ=3518675689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647470D0000000001030307) Feb 1 03:59:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61420 DF PROTO=TCP SPT=48410 DPT=9100 SEQ=3396714805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647530D0000000001030307) Feb 1 03:59:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41675 DF PROTO=TCP SPT=39862 DPT=9101 SEQ=1408653870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647610D0000000001030307) Feb 1 03:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 03:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 03:59:55 localhost podman[108430]: 2026-02-01 08:59:55.87595792 +0000 UTC m=+0.087111749 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git) Feb 1 03:59:55 localhost podman[108430]: 2026-02-01 08:59:55.892925489 +0000 UTC m=+0.104079308 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Feb 1 03:59:55 localhost systemd[1]: tmp-crun.oti1IQ.mount: Deactivated successfully. Feb 1 03:59:55 localhost podman[108431]: 2026-02-01 08:59:55.938232853 +0000 UTC m=+0.145715727 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 03:59:55 localhost podman[108430]: unhealthy Feb 1 03:59:55 localhost podman[108431]: 2026-02-01 08:59:55.95674081 +0000 UTC m=+0.164223694 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller) Feb 1 03:59:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:59:55 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. Feb 1 03:59:55 localhost podman[108431]: unhealthy Feb 1 03:59:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 03:59:55 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. 
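The ovn_metadata_agent and ovn_controller healthchecks above keep returning unhealthy, and the transient container-ID units fail with status=1 each time. A hedged sketch for reproducing the check by hand, using the same podman healthcheck run command systemd invokes; the inspect field is read defensively because the key name ("Health" vs "Healthcheck") differs between podman versions:

    import json
    import subprocess

    def container_health(name: str) -> str:
        # Re-run the container's own healthcheck (cf. "/usr/bin/podman healthcheck
        # run <id>" above); a non-zero exit code means unhealthy.
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True).returncode
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return f"exit={rc} status={health.get('Status', 'unknown')}"

    # e.g. container_health("ovn_metadata_agent"); the checks themselves are the
    # /openstack/healthcheck scripts named in the config_data entries above.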
Feb 1 04:00:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7057 DF PROTO=TCP SPT=41244 DPT=9882 SEQ=397158234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64771ED0000000001030307) Feb 1 04:00:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7058 DF PROTO=TCP SPT=41244 DPT=9882 SEQ=397158234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647760D0000000001030307) Feb 1 04:00:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7059 DF PROTO=TCP SPT=41244 DPT=9882 SEQ=397158234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6477E0E0000000001030307) Feb 1 04:00:05 localhost sshd[108468]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:00:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45584 DF PROTO=TCP SPT=53482 DPT=9102 SEQ=864165797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6478B8D0000000001030307) Feb 1 04:00:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50678 DF PROTO=TCP SPT=36468 DPT=9100 SEQ=3026304708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647980D0000000001030307) Feb 1 04:00:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41676 DF PROTO=TCP SPT=39862 DPT=9101 SEQ=1408653870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647A10D0000000001030307) Feb 1 04:00:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7061 DF PROTO=TCP SPT=41244 DPT=9882 SEQ=397158234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647AF0D0000000001030307) Feb 1 04:00:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45586 DF PROTO=TCP SPT=53482 DPT=9102 SEQ=864165797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647BB0D0000000001030307) Feb 1 04:00:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50680 DF PROTO=TCP SPT=36468 DPT=9100 SEQ=3026304708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647C90D0000000001030307) Feb 1 04:00:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12287 DF PROTO=TCP SPT=53502 DPT=9101 SEQ=3656957715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647D50E0000000001030307) Feb 1 04:00:25 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. 
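The entry just above shows systemd giving up on a clean stop of tripleo_nova_virtlogd_wrapper.service ('stop-sigterm' timed out) before escalating to SIGKILL in the entries that follow. A sketch for pulling the relevant unit properties; if the wrapper genuinely needs longer to shut down, raising TimeoutStopSec= in a drop-in (systemctl edit) would be the usual remedy, though nothing in this log confirms that is the right fix here:

    import subprocess

    def stop_timeout_info(unit: str) -> dict:
        # How long systemd waits before escalating to SIGKILL, plus the final
        # result recorded for the unit (here: 'timeout').
        props = "TimeoutStopUSec,Result,ExecMainStatus"
        out = subprocess.run(["systemctl", "show", unit, "-p", props],
                             capture_output=True, text=True, check=True).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    # e.g. stop_timeout_info("tripleo_nova_virtlogd_wrapper.service")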
Feb 1 04:00:25 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61244 (conmon) with signal SIGKILL. Feb 1 04:00:25 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Feb 1 04:00:25 localhost systemd[1]: libpod-conmon-4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa.scope: Deactivated successfully. Feb 1 04:00:25 localhost systemd[1]: tmp-crun.qagyoO.mount: Deactivated successfully. Feb 1 04:00:25 localhost podman[108559]: error opening file `/run/crun/4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa/status`: No such file or directory Feb 1 04:00:25 localhost podman[108545]: 2026-02-01 09:00:25.5457218 +0000 UTC m=+0.079077877 container cleanup 4e95a1e950181e0667190c6bd97db8923001994e00adc494730e9ca958d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1766032510, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13) Feb 1 04:00:25 localhost podman[108545]: nova_virtlogd_wrapper Feb 1 04:00:25 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Feb 1 04:00:25 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. Feb 1 04:00:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. Feb 1 04:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 04:00:26 localhost podman[108653]: 2026-02-01 09:00:26.102180831 +0000 UTC m=+0.091058311 container health_status e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:00:26 localhost podman[108654]: 2026-02-01 09:00:26.153394419 +0000 UTC m=+0.140577877 container health_status e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 1 04:00:26 localhost podman[108653]: 2026-02-01 09:00:26.169490771 +0000 UTC m=+0.158368211 container exec_died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 04:00:26 localhost podman[108653]: unhealthy Feb 1 04:00:26 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:00:26 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed with result 'exit-code'. 
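The failing transient units above are named after the full container ID (e8f71eed....service, e9aad777....service), which makes it straightforward to pull just their healthcheck history out of the journal. A sketch assuming that naming convention, which holds throughout this log:

    import subprocess

    def healthcheck_journal(container: str, since: str = "1 hour ago") -> str:
        # Resolve the container name to its full ID, then read the journal of the
        # transient <id>.service unit that runs its periodic healthcheck.
        cid = subprocess.run(["podman", "inspect", "--format", "{{.Id}}", container],
                             capture_output=True, text=True, check=True).stdout.strip()
        return subprocess.run(["journalctl", "-u", f"{cid}.service",
                               "--since", since, "--no-pager"],
                              capture_output=True, text=True, check=True).stdout

    # e.g. healthcheck_journal("ovn_controller") returns the health_status and
    # exec_died entries seen above.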
Feb 1 04:00:26 localhost podman[108654]: 2026-02-01 09:00:26.194675117 +0000 UTC m=+0.181858605 container exec_died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64) Feb 1 04:00:26 localhost podman[108654]: unhealthy Feb 1 04:00:26 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:00:26 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed with result 'exit-code'. Feb 1 04:00:26 localhost python3.9[108652]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:26 localhost systemd[1]: Reloading. Feb 1 04:00:26 localhost systemd-rc-local-generator[108717]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:26 localhost systemd-sysv-generator[108722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
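The ansible-ansible.builtin.systemd_service entry above (enabled=False, state=stopped for tripleo_nova_virtnodedevd.service) is what drives the container stops that follow. As a rough sketch of what that request amounts to on the host, not a substitute for the ansible module itself:

    import subprocess

    def disable_and_stop(unit: str) -> None:
        # Approximate systemctl equivalent of the ansible systemd_service call
        # logged above: disable the unit, then stop it if it is running.
        subprocess.run(["systemctl", "disable", unit], check=True)
        subprocess.run(["systemctl", "stop", unit], check=True)

    # e.g. disable_and_stop("tripleo_nova_virtnodedevd.service")

The adjacent generator warnings (rc.local not executable, the SysV network script, MemoryLimit= in insights-client-boot.service) recur on every "Reloading." in this log and appear unrelated to the container stop itself.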
Feb 1 04:00:26 localhost systemd[1]: Stopping nova_virtnodedevd container... Feb 1 04:00:26 localhost systemd[1]: libpod-883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff.scope: Deactivated successfully. Feb 1 04:00:26 localhost systemd[1]: libpod-883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff.scope: Consumed 1.464s CPU time. Feb 1 04:00:26 localhost podman[108730]: 2026-02-01 09:00:26.764505925 +0000 UTC m=+0.082054411 container died 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
io.buildah.version=1.41.5, container_name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc.) Feb 1 04:00:26 localhost podman[108730]: 2026-02-01 09:00:26.807001811 +0000 UTC m=+0.124550297 container cleanup 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=nova_virtnodedevd, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 04:00:26 
localhost podman[108730]: nova_virtnodedevd Feb 1 04:00:26 localhost podman[108745]: 2026-02-01 09:00:26.859679284 +0000 UTC m=+0.074628609 container cleanup 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, container_name=nova_virtnodedevd, config_id=tripleo_step3, version=17.1.13, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container) Feb 1 04:00:26 localhost systemd[1]: libpod-conmon-883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff.scope: Deactivated successfully. 
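With the tripleo_nova_* containers being disabled and stopped one after another in these entries (virtlogd_wrapper, virtnodedevd, and virtproxyd next), it can help to snapshot the remaining units in one go. A sketch using systemctl's pattern matching; the pattern is only an assumption about the unit naming used in this deployment:

    import subprocess

    def tripleo_unit_states(pattern: str = "tripleo_nova_*"):
        # List load/active/sub state for every unit matching the pattern.
        out = subprocess.run(
            ["systemctl", "list-units", pattern, "--all", "--plain", "--no-legend"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split(None, 4) for line in out.splitlines()]

    # Each returned row is [UNIT, LOAD, ACTIVE, SUB, DESCRIPTION].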
Feb 1 04:00:26 localhost podman[108773]: error opening file `/run/crun/883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff/status`: No such file or directory Feb 1 04:00:26 localhost podman[108761]: 2026-02-01 09:00:26.955125792 +0000 UTC m=+0.061769408 container cleanup 883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 04:00:26 
localhost podman[108761]: nova_virtnodedevd Feb 1 04:00:26 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Feb 1 04:00:26 localhost systemd[1]: Stopped nova_virtnodedevd container. Feb 1 04:00:27 localhost systemd[1]: var-lib-containers-storage-overlay-7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a-merged.mount: Deactivated successfully. Feb 1 04:00:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-883f6f56ab9fa5aa479ce063c46f9ab4ceeecb724900013295141ba5cef97aff-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:27 localhost python3.9[108866]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:27 localhost systemd[1]: Reloading. Feb 1 04:00:27 localhost systemd-rc-local-generator[108890]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:27 localhost systemd-sysv-generator[108896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:28 localhost systemd[1]: Stopping nova_virtproxyd container... Feb 1 04:00:28 localhost systemd[1]: libpod-3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac.scope: Deactivated successfully. Feb 1 04:00:28 localhost podman[108906]: 2026-02-01 09:00:28.226071993 +0000 UTC m=+0.074251157 container died 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtproxyd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1766032510) Feb 1 04:00:28 localhost podman[108906]: 2026-02-01 09:00:28.266863656 +0000 UTC m=+0.115042790 container cleanup 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, container_name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.5) Feb 1 04:00:28 localhost podman[108906]: nova_virtproxyd Feb 1 04:00:28 localhost podman[108920]: 2026-02-01 09:00:28.307206794 +0000 UTC m=+0.072513443 container cleanup 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtproxyd, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 04:00:28 localhost systemd[1]: libpod-conmon-3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac.scope: Deactivated successfully. Feb 1 04:00:28 localhost podman[108946]: error opening file `/run/crun/3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac/status`: No such file or directory Feb 1 04:00:28 localhost podman[108935]: 2026-02-01 09:00:28.416712181 +0000 UTC m=+0.072023898 container cleanup 3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=nova_virtproxyd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 1 04:00:28 localhost podman[108935]: nova_virtproxyd Feb 1 04:00:28 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Feb 1 04:00:28 localhost systemd[1]: Stopped nova_virtproxyd container. Feb 1 04:00:28 localhost systemd[1]: var-lib-containers-storage-overlay-671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce-merged.mount: Deactivated successfully. Feb 1 04:00:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3165c6655234691f8c773165b4ab8c73e334d265dc9bbf78941b544f8a9449ac-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:29 localhost python3.9[109041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:29 localhost systemd[1]: Reloading. Feb 1 04:00:29 localhost systemd-rc-local-generator[109066]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:29 localhost systemd-sysv-generator[109073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Feb 1 04:00:29 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Feb 1 04:00:29 localhost systemd[1]: Stopping nova_virtqemud container... Feb 1 04:00:29 localhost systemd[1]: libpod-526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70.scope: Deactivated successfully. Feb 1 04:00:29 localhost systemd[1]: libpod-526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70.scope: Consumed 2.095s CPU time. 
Feb 1 04:00:29 localhost podman[109081]: 2026-02-01 09:00:29.56353611 +0000 UTC m=+0.076320673 container died 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, version=17.1.13, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=nova_virtqemud, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3) Feb 1 04:00:29 localhost podman[109081]: 2026-02-01 09:00:29.598441309 +0000 UTC m=+0.111225842 container cleanup 
526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, distribution-scope=public, config_id=tripleo_step3, container_name=nova_virtqemud, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 1 04:00:29 localhost podman[109081]: nova_virtqemud Feb 1 04:00:29 localhost podman[109095]: 2026-02-01 09:00:29.638365694 +0000 UTC m=+0.056860015 container cleanup 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Feb 1 04:00:29 localhost systemd[1]: var-lib-containers-storage-overlay-457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64-merged.mount: Deactivated successfully. 
Feb 1 04:00:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:29 localhost systemd[1]: libpod-conmon-526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70.scope: Deactivated successfully. Feb 1 04:00:29 localhost podman[109121]: error opening file `/run/crun/526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70/status`: No such file or directory Feb 1 04:00:29 localhost podman[109109]: 2026-02-01 09:00:29.753082713 +0000 UTC m=+0.077425646 container cleanup 526ebca495097f81b426b805eea65425bd57d645348c5ecb89d3373156d69f70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtqemud, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git) Feb 1 04:00:29 localhost podman[109109]: nova_virtqemud Feb 1 04:00:29 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Feb 1 04:00:29 localhost systemd[1]: Stopped nova_virtqemud container. Feb 1 04:00:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54942 DF PROTO=TCP SPT=41284 DPT=9882 SEQ=3822266166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647E71C0000000001030307) Feb 1 04:00:30 localhost python3.9[109214]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:30 localhost systemd[1]: Reloading. Feb 1 04:00:30 localhost systemd-rc-local-generator[109239]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:30 localhost systemd-sysv-generator[109244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54943 DF PROTO=TCP SPT=41284 DPT=9882 SEQ=3822266166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647EB0D0000000001030307) Feb 1 04:00:31 localhost python3.9[109344]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:32 localhost systemd[1]: Reloading. Feb 1 04:00:32 localhost systemd-sysv-generator[109373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:32 localhost systemd-rc-local-generator[109370]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:33 localhost systemd[1]: Stopping nova_virtsecretd container... 
Feb 1 04:00:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54944 DF PROTO=TCP SPT=41284 DPT=9882 SEQ=3822266166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA647F30D0000000001030307) Feb 1 04:00:33 localhost systemd[1]: libpod-a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3.scope: Deactivated successfully. Feb 1 04:00:33 localhost podman[109385]: 2026-02-01 09:00:33.118885521 +0000 UTC m=+0.078350555 container died a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtsecretd, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Feb 1 04:00:33 localhost podman[109385]: 2026-02-01 09:00:33.152062417 +0000 UTC m=+0.111527411 container cleanup a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, tcib_managed=true, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 1 04:00:33 localhost podman[109385]: nova_virtsecretd Feb 1 04:00:33 localhost podman[109400]: 2026-02-01 09:00:33.212238443 +0000 UTC m=+0.078092477 container cleanup a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_virtsecretd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt) Feb 1 04:00:33 localhost systemd[1]: 
libpod-conmon-a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3.scope: Deactivated successfully. Feb 1 04:00:33 localhost podman[109427]: error opening file `/run/crun/a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3/status`: No such file or directory Feb 1 04:00:33 localhost podman[109415]: 2026-02-01 09:00:33.32208195 +0000 UTC m=+0.077998343 container cleanup a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, container_name=nova_virtsecretd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container) Feb 1 04:00:33 localhost podman[109415]: nova_virtsecretd Feb 1 04:00:33 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Feb 1 04:00:33 localhost systemd[1]: Stopped nova_virtsecretd container. Feb 1 04:00:34 localhost systemd[1]: var-lib-containers-storage-overlay-abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7-merged.mount: Deactivated successfully. Feb 1 04:00:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a40eccdb34adf480cf82211333a94dba024795e8b0c70208a91e22bed3cf9ef3-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:34 localhost python3.9[109520]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:35 localhost systemd[1]: Reloading. Feb 1 04:00:35 localhost systemd-rc-local-generator[109546]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:35 localhost systemd-sysv-generator[109550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:35 localhost systemd[1]: Stopping nova_virtstoraged container... Feb 1 04:00:35 localhost systemd[1]: libpod-39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5.scope: Deactivated successfully. 
Feb 1 04:00:35 localhost podman[109561]: 2026-02-01 09:00:35.575175464 +0000 UTC m=+0.075509047 container died 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtstoraged, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5) Feb 1 04:00:35 localhost systemd[1]: tmp-crun.PE60ms.mount: Deactivated successfully. 
Feb 1 04:00:35 localhost podman[109561]: 2026-02-01 09:00:35.623162511 +0000 UTC m=+0.123496044 container cleanup 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtstoraged, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=) Feb 1 04:00:35 localhost podman[109561]: nova_virtstoraged Feb 1 04:00:35 localhost podman[109574]: 2026-02-01 09:00:35.650986729 +0000 UTC m=+0.065739953 container cleanup 
39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-type=git, version=17.1.13, release=1766032510, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 1 04:00:35 localhost systemd[1]: libpod-conmon-39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5.scope: Deactivated successfully. 
Feb 1 04:00:35 localhost podman[109603]: error opening file `/run/crun/39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5/status`: No such file or directory Feb 1 04:00:35 localhost podman[109591]: 2026-02-01 09:00:35.744249028 +0000 UTC m=+0.064462752 container cleanup 39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1296029e90a465a2201c8dc6f8be17e7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, build-date=2026-01-12T23:31:49Z) Feb 1 04:00:35 
localhost podman[109591]: nova_virtstoraged Feb 1 04:00:35 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Feb 1 04:00:35 localhost systemd[1]: Stopped nova_virtstoraged container. Feb 1 04:00:36 localhost python3.9[109696]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:36 localhost systemd[1]: Reloading. Feb 1 04:00:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1259 DF PROTO=TCP SPT=43364 DPT=9102 SEQ=3186521073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64800CD0000000001030307) Feb 1 04:00:36 localhost systemd-sysv-generator[109722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:36 localhost systemd-rc-local-generator[109719]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:36 localhost systemd[1]: tmp-crun.aIpB62.mount: Deactivated successfully. Feb 1 04:00:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39157644699ac29a43c584aadeae04badec11f2504b552a425c9256e9b3f3dc5-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:36 localhost systemd[1]: var-lib-containers-storage-overlay-42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9-merged.mount: Deactivated successfully. Feb 1 04:00:36 localhost systemd[1]: Stopping ovn_controller container... Feb 1 04:00:36 localhost systemd[1]: libpod-e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.scope: Deactivated successfully. Feb 1 04:00:36 localhost systemd[1]: libpod-e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.scope: Consumed 2.533s CPU time. 
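The kernel `DROPPING:` entries interleaved with the container teardown are netfilter LOG records (the prefix comes from this host's firewall LOG rule) for blocked SYNs to ports 9100/9101/9102/9105/9882. A small sketch that tallies them by destination port; the `DROPPING:` prefix and field names are taken from this log, while the input path is a placeholder assumption:

```python
# Sketch: summarize the kernel "DROPPING:" netfilter log entries by
# destination address/protocol/port. Assumes the messages are available in
# a plain text file; the path below is a placeholder.
import re
from collections import Counter

LINE_RE = re.compile(
    r"DROPPING:.*?SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?PROTO=(?P<proto>\S+) "
    r"SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
)

def count_drops(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.search(line)
            if m:
                counts[(m["dst"], m["proto"], m["dpt"])] += 1
    return counts

if __name__ == "__main__":
    for (dst, proto, dpt), n in count_drops("/var/log/messages").most_common():
        print(f"{n:5d}  {proto} -> {dst}:{dpt}")
```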
Feb 1 04:00:36 localhost podman[109737]: 2026-02-01 09:00:36.904580918 +0000 UTC m=+0.060611321 container died e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Feb 1 04:00:36 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.timer: Deactivated successfully. Feb 1 04:00:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257. Feb 1 04:00:36 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed to open /run/systemd/transient/e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: No such file or directory Feb 1 04:00:36 localhost systemd[1]: tmp-crun.Y3MMKR.mount: Deactivated successfully. 
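The `<container-id>.timer` / `<container-id>.service` units named above are the transient systemd units podman uses to drive `podman healthcheck run`; the "Failed to open /run/systemd/transient/<id>.service: No such file or directory" messages show up while those units are being torn down as the container stops. A small sketch, assuming only the /run/systemd/transient path shown in the log, that checks whether any healthcheck units are still present for a given container ID:

```python
# Sketch: check for leftover transient healthcheck units for a container.
# The directory comes from the log messages above; the container ID is the
# full 64-hex-char ID podman prints (e.g. e9aad777...e257).
from pathlib import Path

TRANSIENT_DIR = Path("/run/systemd/transient")

def leftover_healthcheck_units(container_id: str) -> list:
    # The healthcheck timer/service in this log are named after the full
    # container ID, so a simple glob on that prefix is enough here.
    return sorted(TRANSIENT_DIR.glob(f"{container_id}.*"))

if __name__ == "__main__":
    cid = "e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257"
    units = leftover_healthcheck_units(cid)
    print("leftover units:", [u.name for u in units] or "none")
```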
Feb 1 04:00:37 localhost podman[109737]: 2026-02-01 09:00:37.053260057 +0000 UTC m=+0.209290450 container cleanup e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1766032510, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, container_name=ovn_controller) Feb 1 04:00:37 localhost podman[109737]: ovn_controller Feb 1 04:00:37 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.timer: Failed to open /run/systemd/transient/e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.timer: No such file or directory Feb 1 04:00:37 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed to open /run/systemd/transient/e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: No such file or directory Feb 1 04:00:37 localhost podman[109751]: 2026-02-01 09:00:37.066438488 +0000 UTC m=+0.150806276 container cleanup e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 1 04:00:37 localhost systemd[1]: libpod-conmon-e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.scope: Deactivated successfully. Feb 1 04:00:37 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.timer: Failed to open /run/systemd/transient/e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.timer: No such file or directory Feb 1 04:00:37 localhost systemd[1]: e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: Failed to open /run/systemd/transient/e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257.service: No such file or directory Feb 1 04:00:37 localhost podman[109764]: 2026-02-01 09:00:37.167089319 +0000 UTC m=+0.065151974 container cleanup e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.5) Feb 1 04:00:37 localhost podman[109764]: ovn_controller Feb 1 04:00:37 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Feb 1 04:00:37 localhost systemd[1]: Stopped ovn_controller container. Feb 1 04:00:37 localhost systemd[1]: var-lib-containers-storage-overlay-ae6e92d81edd57130eba0dea91809d1be824b840176ebe669287b6264f5d2d37-merged.mount: Deactivated successfully. Feb 1 04:00:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:37 localhost python3.9[109867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:39 localhost systemd[1]: Reloading. Feb 1 04:00:39 localhost systemd-sysv-generator[109893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:39 localhost systemd-rc-local-generator[109890]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:00:39 localhost systemd[1]: Stopping ovn_metadata_agent container... Feb 1 04:00:39 localhost systemd[1]: tmp-crun.A0W9P4.mount: Deactivated successfully. Feb 1 04:00:39 localhost systemd[1]: libpod-e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.scope: Deactivated successfully. Feb 1 04:00:39 localhost systemd[1]: libpod-e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.scope: Consumed 9.254s CPU time. 
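Every `Reloading.` in these records re-emits the warning that /usr/lib/systemd/system/insights-client-boot.service line 24 still uses the deprecated `MemoryLimit=` directive instead of `MemoryMax=`. A sketch, assuming only the standard vendor/admin unit directories, that lists which installed unit files still carry the old directive (the usual fix is a drop-in override rather than editing the vendor file):

```python
# Sketch: find unit files that still use the deprecated MemoryLimit=
# directive flagged by systemd in this log ("please use MemoryMax= instead").
from pathlib import Path

UNIT_DIRS = [Path("/usr/lib/systemd/system"), Path("/etc/systemd/system")]

def units_with_memorylimit() -> list:
    hits = []
    for unit_dir in UNIT_DIRS:
        if not unit_dir.is_dir():
            continue
        for unit in unit_dir.rglob("*.service"):
            try:
                text = unit.read_text(encoding="utf-8", errors="replace")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if line.strip().startswith("MemoryLimit="):
                    hits.append(f"{unit}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    print("\n".join(units_with_memorylimit()) or "no MemoryLimit= found")
```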
Feb 1 04:00:39 localhost podman[109907]: 2026-02-01 09:00:39.558887478 +0000 UTC m=+0.244774057 container stop e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 1 04:00:39 localhost podman[109907]: 2026-02-01 09:00:39.588363297 +0000 UTC m=+0.274249886 container died e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.timer: Deactivated successfully. Feb 1 04:00:39 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06. 
Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed to open /run/systemd/transient/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: No such file or directory Feb 1 04:00:39 localhost podman[109907]: 2026-02-01 09:00:39.706763781 +0000 UTC m=+0.392650370 container cleanup e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 04:00:39 localhost podman[109907]: ovn_metadata_agent Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.timer: Failed to open 
/run/systemd/transient/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.timer: No such file or directory Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed to open /run/systemd/transient/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: No such file or directory Feb 1 04:00:39 localhost podman[109920]: 2026-02-01 09:00:39.731445321 +0000 UTC m=+0.156674359 container cleanup e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 1 04:00:39 localhost systemd[1]: libpod-conmon-e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.scope: 
Deactivated successfully. Feb 1 04:00:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41272 DF PROTO=TCP SPT=36132 DPT=9100 SEQ=2460966367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6480D4D0000000001030307) Feb 1 04:00:39 localhost podman[109950]: error opening file `/run/crun/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06/status`: No such file or directory Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.timer: Failed to open /run/systemd/transient/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.timer: No such file or directory Feb 1 04:00:39 localhost systemd[1]: e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: Failed to open /run/systemd/transient/e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06.service: No such file or directory Feb 1 04:00:39 localhost podman[109937]: 2026-02-01 09:00:39.842154776 +0000 UTC m=+0.074536897 container cleanup e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 04:00:39 localhost podman[109937]: ovn_metadata_agent Feb 1 04:00:39 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Feb 1 04:00:39 localhost systemd[1]: Stopped ovn_metadata_agent container. Feb 1 04:00:40 localhost systemd[1]: var-lib-containers-storage-overlay-d506918155a93476a6405c9e2c98cb06d7e575d23557b96e2d10a36860f0cb4c-merged.mount: Deactivated successfully. Feb 1 04:00:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06-userdata-shm.mount: Deactivated successfully. Feb 1 04:00:40 localhost python3.9[110043]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:00:40 localhost systemd[1]: Reloading. Feb 1 04:00:40 localhost systemd-sysv-generator[110073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:00:40 localhost systemd-rc-local-generator[110067]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:00:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
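The ceph-osd "DUMPING STATS" records that follow are multi-line RocksDB reports flattened by rsyslog's control-character escaping, which encodes each embedded newline as the octal sequence `#012`. A sketch that restores the line breaks so the DB Stats block reads as RocksDB prints it; only the `#012` convention is taken from this log, and the sample string is a shortened placeholder:

```python
# Sketch: undo rsyslog's "#NNN" octal control-character escaping in the
# ceph-osd "DUMPING STATS" records below ("#012" is octal 012 == newline).
import re

ESCAPE_RE = re.compile(r"#(\d{3})")

def unescape_rsyslog(message: str) -> str:
    # Replace each "#NNN" octal escape with the corresponding character,
    # e.g. "#012" -> "\n".
    return ESCAPE_RE.sub(lambda m: chr(int(m.group(1), 8)), message)

if __name__ == "__main__":
    sample = "#012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval"
    print(unescape_rsyslog(sample))
```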
Feb 1 04:00:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62776 DF PROTO=TCP SPT=49370 DPT=9101 SEQ=1282612434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6481A4D0000000001030307) Feb 1 04:00:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54946 DF PROTO=TCP SPT=41284 DPT=9882 SEQ=3822266166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648230D0000000001030307) Feb 1 04:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:00:46 localhost sshd[110096]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:00:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1261 DF PROTO=TCP SPT=43364 DPT=9102 SEQ=3186521073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648310D0000000001030307) Feb 1 04:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:00:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41274 DF PROTO=TCP SPT=36132 DPT=9100 SEQ=2460966367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6483D0E0000000001030307) Feb 1 04:00:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62778 DF PROTO=TCP SPT=49370 DPT=9101 SEQ=1282612434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6484B0E0000000001030307) Feb 1 04:01:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51375 DF PROTO=TCP SPT=56472 DPT=9882 SEQ=2463887314 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6485C4D0000000001030307) Feb 1 04:01:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51376 DF PROTO=TCP SPT=56472 DPT=9882 SEQ=2463887314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648604E0000000001030307) Feb 1 04:01:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51377 DF PROTO=TCP SPT=56472 DPT=9882 SEQ=2463887314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648684D0000000001030307) Feb 1 04:01:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1422 DF PROTO=TCP SPT=54214 DPT=9102 SEQ=492321943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64875CD0000000001030307) Feb 1 04:01:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2188 DF PROTO=TCP SPT=51398 DPT=9100 SEQ=3609510058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648824D0000000001030307) Feb 1 04:01:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11327 DF PROTO=TCP SPT=35098 DPT=9101 SEQ=1865113286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6488F8E0000000001030307) Feb 1 04:01:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51379 DF PROTO=TCP SPT=56472 DPT=9882 SEQ=2463887314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648990D0000000001030307) Feb 1 04:01:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1424 DF PROTO=TCP SPT=54214 DPT=9102 SEQ=492321943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648A50D0000000001030307) Feb 1 04:01:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2190 DF PROTO=TCP SPT=51398 DPT=9100 SEQ=3609510058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648B30D0000000001030307) Feb 1 04:01:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11329 DF PROTO=TCP SPT=35098 DPT=9101 SEQ=1865113286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648BF0D0000000001030307) Feb 1 04:01:28 localhost sshd[110171]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:01:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3236 DF PROTO=TCP SPT=60732 DPT=9882 SEQ=3897439414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648D17C0000000001030307) Feb 1 04:01:31 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3237 DF PROTO=TCP SPT=60732 DPT=9882 SEQ=3897439414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648D58E0000000001030307) Feb 1 04:01:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3238 DF PROTO=TCP SPT=60732 DPT=9882 SEQ=3897439414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648DD8D0000000001030307) Feb 1 04:01:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24227 DF PROTO=TCP SPT=53696 DPT=9102 SEQ=853338594 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648EB0D0000000001030307) Feb 1 04:01:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10533 DF PROTO=TCP SPT=56770 DPT=9100 SEQ=2294429146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA648F78D0000000001030307) Feb 1 04:01:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16210 DF PROTO=TCP SPT=53240 DPT=9101 SEQ=1019892307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64904CD0000000001030307) Feb 1 04:01:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3240 DF PROTO=TCP SPT=60732 DPT=9882 SEQ=3897439414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6490D0D0000000001030307) Feb 1 04:01:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24229 DF PROTO=TCP SPT=53696 DPT=9102 SEQ=853338594 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6491B0D0000000001030307) Feb 1 04:01:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10535 DF PROTO=TCP SPT=56770 DPT=9100 SEQ=2294429146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649270E0000000001030307) Feb 1 04:01:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16212 DF PROTO=TCP SPT=53240 DPT=9101 SEQ=1019892307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649350D0000000001030307) Feb 1 04:02:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25937 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=1834266595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64946AD0000000001030307) Feb 1 04:02:01 localhost systemd[1]: session-36.scope: Deactivated successfully. Feb 1 04:02:01 localhost systemd[1]: session-36.scope: Consumed 18.168s CPU time. Feb 1 04:02:01 localhost systemd-logind[761]: Session 36 logged out. 
Waiting for processes to exit. Feb 1 04:02:01 localhost systemd-logind[761]: Removed session 36. Feb 1 04:02:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25938 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=1834266595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6494ACE0000000001030307) Feb 1 04:02:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25939 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=1834266595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64952CE0000000001030307) Feb 1 04:02:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30153 DF PROTO=TCP SPT=44568 DPT=9102 SEQ=2928275220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649604D0000000001030307) Feb 1 04:02:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16626 DF PROTO=TCP SPT=48068 DPT=9100 SEQ=500505732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6496CCD0000000001030307) Feb 1 04:02:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12342 DF PROTO=TCP SPT=52708 DPT=9101 SEQ=2462273802 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64979CD0000000001030307) Feb 1 04:02:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25941 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=1834266595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649830E0000000001030307) Feb 1 04:02:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30155 DF PROTO=TCP SPT=44568 DPT=9102 SEQ=2928275220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649910E0000000001030307) Feb 1 04:02:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16628 DF PROTO=TCP SPT=48068 DPT=9100 SEQ=500505732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6499D0D0000000001030307) Feb 1 04:02:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12344 DF PROTO=TCP SPT=52708 DPT=9101 SEQ=2462273802 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649A90D0000000001030307) Feb 1 04:02:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25234 DF PROTO=TCP SPT=38550 DPT=9882 SEQ=290365811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649BDC50000000001030307) Feb 1 04:02:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25235 DF PROTO=TCP SPT=38550 DPT=9882 SEQ=290365811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649C1CD0000000001030307) Feb 1 04:02:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5317 DF PROTO=TCP SPT=33866 DPT=9105 SEQ=2217420361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649C90D0000000001030307) Feb 1 04:02:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47192 DF PROTO=TCP SPT=55370 DPT=9102 SEQ=929202725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649D5A60000000001030307) Feb 1 04:02:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41193 DF PROTO=TCP SPT=55428 DPT=9100 SEQ=1921553124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649E20D0000000001030307) Feb 1 04:02:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63908 DF PROTO=TCP SPT=56862 DPT=9101 SEQ=3142567501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649EF0E0000000001030307) Feb 1 04:02:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25238 DF PROTO=TCP SPT=38550 DPT=9882 SEQ=290365811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA649F90D0000000001030307) Feb 1 04:02:48 localhost sshd[110266]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:02:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47194 DF PROTO=TCP SPT=55370 DPT=9102 SEQ=929202725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A050D0000000001030307) Feb 1 04:02:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41195 DF PROTO=TCP SPT=55428 DPT=9100 SEQ=1921553124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A130D0000000001030307) Feb 1 04:02:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63910 DF PROTO=TCP SPT=56862 DPT=9101 SEQ=3142567501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A1F0D0000000001030307) Feb 1 04:03:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27631 DF PROTO=TCP SPT=38690 DPT=9882 SEQ=102954606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A310E0000000001030307) Feb 1 04:03:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27632 DF PROTO=TCP SPT=38690 DPT=9882 
SEQ=102954606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A350E0000000001030307) Feb 1 04:03:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27633 DF PROTO=TCP SPT=38690 DPT=9882 SEQ=102954606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A3D0D0000000001030307) Feb 1 04:03:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39714 DF PROTO=TCP SPT=53930 DPT=9102 SEQ=2356406653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A4A8E0000000001030307) Feb 1 04:03:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23682 DF PROTO=TCP SPT=50332 DPT=9100 SEQ=2193449943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A570D0000000001030307) Feb 1 04:03:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42105 DF PROTO=TCP SPT=34016 DPT=9101 SEQ=2965222399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A644E0000000001030307) Feb 1 04:03:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27635 DF PROTO=TCP SPT=38690 DPT=9882 SEQ=102954606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A6D0D0000000001030307) Feb 1 04:03:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39716 DF PROTO=TCP SPT=53930 DPT=9102 SEQ=2356406653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A7B0D0000000001030307) Feb 1 04:03:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23684 DF PROTO=TCP SPT=50332 DPT=9100 SEQ=2193449943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A870D0000000001030307) Feb 1 04:03:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42107 DF PROTO=TCP SPT=34016 DPT=9101 SEQ=2965222399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64A950D0000000001030307) Feb 1 04:03:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23711 DF PROTO=TCP SPT=60510 DPT=9882 SEQ=4061105213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AA63C0000000001030307) Feb 1 04:03:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23712 DF PROTO=TCP SPT=60510 DPT=9882 SEQ=4061105213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AAA4D0000000001030307) Feb 1 04:03:31 localhost sshd[110310]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:03:31 localhost 
systemd-logind[761]: New session 37 of user zuul. Feb 1 04:03:31 localhost systemd[1]: Started Session 37 of User zuul. Feb 1 04:03:31 localhost python3.9[110429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:32 localhost python3.9[110551]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:32 localhost python3.9[110658]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23713 DF PROTO=TCP SPT=60510 DPT=9882 SEQ=4061105213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AB24D0000000001030307) Feb 1 04:03:33 localhost python3.9[110750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:34 localhost python3.9[110842]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:34 localhost python3.9[110934]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:35 localhost python3.9[111026]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:35 localhost python3.9[111118]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:36 localhost python3.9[111210]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36929 DF PROTO=TCP SPT=44138 DPT=9102 SEQ=283164502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64ABFCD0000000001030307) Feb 1 04:03:37 localhost python3.9[111302]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:37 localhost python3.9[111394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:38 localhost python3.9[111486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:38 localhost python3.9[111578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:39 localhost python3.9[111670]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38530 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2192356510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64ACC4D0000000001030307) Feb 1 04:03:40 localhost python3.9[111762]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:40 localhost python3.9[111854]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:41 localhost python3.9[111946]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:41 localhost python3.9[112038]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:42 localhost python3.9[112130]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:42 localhost python3.9[112222]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Feb 1 04:03:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7876 DF PROTO=TCP SPT=59982 DPT=9101 SEQ=769046800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AD98D0000000001030307) Feb 1 04:03:43 localhost python3.9[112314]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:44 localhost python3.9[112406]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:45 localhost python3.9[112498]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23715 DF PROTO=TCP SPT=60510 DPT=9882 SEQ=4061105213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AE30D0000000001030307) Feb 1 04:03:45 localhost python3.9[112590]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:46 localhost python3.9[112682]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:46 localhost python3.9[112774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:47 
localhost python3.9[112866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:48 localhost python3.9[112958]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36931 DF PROTO=TCP SPT=44138 DPT=9102 SEQ=283164502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AEF0D0000000001030307) Feb 1 04:03:48 localhost python3.9[113050]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:49 localhost python3.9[113142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:49 localhost python3.9[113234]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:50 localhost python3.9[113326]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:51 localhost python3.9[113418]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:51 localhost python3.9[113510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:52 localhost python3.9[113602]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38532 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2192356510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64AFD0E0000000001030307) Feb 1 04:03:52 localhost python3.9[113694]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:53 localhost python3.9[113786]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:54 localhost python3.9[113878]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:54 localhost python3.9[113970]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7878 DF PROTO=TCP SPT=59982 DPT=9101 SEQ=769046800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AA64B090D0000000001030307) Feb 1 04:03:55 localhost python3.9[114062]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:55 localhost python3.9[114154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:56 localhost python3.9[114246]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:03:57 localhost python3.9[114338]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:03:58 localhost python3.9[114430]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:03:59 localhost python3.9[114522]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:03:59 localhost systemd[1]: Reloading. Feb 1 04:03:59 localhost systemd-rc-local-generator[114546]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:03:59 localhost systemd-sysv-generator[114551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:03:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
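The records above show ansible-ansible.builtin.file deleting the tripleo_*.service unit files, first under /usr/lib/systemd/system and then under /etc/systemd/system, followed by an ansible-ansible.builtin.systemd_service daemon_reload (the "systemd[1]: Reloading." record). A minimal shell sketch of what that cleanup amounts to is below; the unit names are the ones visible in the log, but the loop itself is only an illustration, not the playbook that actually ran.

    # Shell sketch (equivalent effect only) of the unit-file cleanup logged above.
    # The real work was done by ansible.builtin.file with state=absent, one task per unit.
    for unit in tripleo_ceilometer_agent_compute tripleo_ceilometer_agent_ipmi \
                tripleo_collectd tripleo_iscsid tripleo_logrotate_crond \
                tripleo_metrics_qdr tripleo_neutron_dhcp tripleo_neutron_l3_agent \
                tripleo_neutron_ovs_agent tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_virtlogd_wrapper tripleo_nova_virtnodedevd tripleo_nova_virtproxyd \
                tripleo_nova_virtqemud tripleo_nova_virtqemud_recover tripleo_nova_virtsecretd \
                tripleo_nova_virtstoraged tripleo_ovn_controller tripleo_ovn_metadata_agent \
                tripleo_rsyslog; do
        rm -f "/usr/lib/systemd/system/${unit}.service" "/etc/systemd/system/${unit}.service"
    done
    systemctl daemon-reload   # corresponds to the "Reloading." record above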
Feb 1 04:04:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12253 DF PROTO=TCP SPT=38286 DPT=9882 SEQ=3581918519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B1B6D0000000001030307) Feb 1 04:04:00 localhost python3.9[114649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12254 DF PROTO=TCP SPT=38286 DPT=9882 SEQ=3581918519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B1F8E0000000001030307) Feb 1 04:04:01 localhost python3.9[114742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:01 localhost python3.9[114835]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:02 localhost python3.9[114928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:03 localhost python3.9[115021]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12255 DF PROTO=TCP SPT=38286 DPT=9882 SEQ=3581918519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B278E0000000001030307) Feb 1 04:04:03 localhost python3.9[115114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:04 localhost python3.9[115207]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:04 localhost python3.9[115300]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:05 localhost sshd[115391]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:04:05 localhost python3.9[115395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:05 localhost python3.9[115488]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42072 DF PROTO=TCP SPT=36072 DPT=9102 SEQ=2451486445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B350E0000000001030307) Feb 1 04:04:06 localhost python3.9[115581]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:07 localhost python3.9[115674]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:07 localhost python3.9[115767]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:08 localhost python3.9[115860]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:09 localhost python3.9[115953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:09 localhost python3.9[116046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55303 DF PROTO=TCP SPT=59064 DPT=9100 SEQ=4002704535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AA64B418D0000000001030307) Feb 1 04:04:10 localhost python3.9[116139]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:10 localhost python3.9[116232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:11 localhost python3.9[116325]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:12 localhost python3.9[116418]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:12 localhost python3.9[116511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45990 DF PROTO=TCP SPT=51644 DPT=9101 SEQ=74014858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B4E8D0000000001030307) Feb 1 04:04:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12257 DF PROTO=TCP SPT=38286 DPT=9882 SEQ=3581918519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B570D0000000001030307) Feb 1 04:04:16 localhost systemd[1]: session-37.scope: Deactivated successfully. Feb 1 04:04:16 localhost systemd[1]: session-37.scope: Consumed 29.424s CPU time. Feb 1 04:04:16 localhost systemd-logind[761]: Session 37 logged out. Waiting for processes to exit. Feb 1 04:04:16 localhost systemd-logind[761]: Removed session 37. 
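The session-37 records that follow the daemon reload run /usr/bin/systemctl reset-failed once per removed tripleo unit, via ansible-ansible.legacy.command. A condensed shell sketch of that step, using a few of the unit names from the log:

    # Clear leftover "failed" state for the deleted tripleo units
    # (the log shows one reset-failed call per unit; only a few are listed here).
    for unit in tripleo_ceilometer_agent_compute tripleo_nova_compute \
                tripleo_ovn_controller tripleo_rsyslog; do
        /usr/bin/systemctl reset-failed "${unit}.service"
    done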
Feb 1 04:04:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42074 DF PROTO=TCP SPT=36072 DPT=9102 SEQ=2451486445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B650D0000000001030307) Feb 1 04:04:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55305 DF PROTO=TCP SPT=59064 DPT=9100 SEQ=4002704535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B710E0000000001030307) Feb 1 04:04:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45992 DF PROTO=TCP SPT=51644 DPT=9101 SEQ=74014858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B7F0D0000000001030307) Feb 1 04:04:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16590 DF PROTO=TCP SPT=34300 DPT=9882 SEQ=1886277604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B909D0000000001030307) Feb 1 04:04:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16591 DF PROTO=TCP SPT=34300 DPT=9882 SEQ=1886277604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B948D0000000001030307) Feb 1 04:04:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16592 DF PROTO=TCP SPT=34300 DPT=9882 SEQ=1886277604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64B9C8D0000000001030307) Feb 1 04:04:35 localhost sshd[116604]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:04:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3960 DF PROTO=TCP SPT=36150 DPT=9102 SEQ=4223255335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BAA4D0000000001030307) Feb 1 04:04:37 localhost sshd[116606]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:04:37 localhost systemd-logind[761]: New session 38 of user zuul. Feb 1 04:04:37 localhost systemd[1]: Started Session 38 of User zuul. 
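The recurring "kernel: DROPPING:" records are netfilter packet logs: SYN packets arriving on br-ex from 192.168.122.10 toward the exporter-style ports 9100-9105 and 9882 are logged with the prefix "DROPPING: " and, judging by the steady retransmissions, discarded. The separate MACSRC=/MACDST=/MACPROTO= fields suggest an nftables log statement rather than the legacy iptables LOG target. A hypothetical rule of roughly this shape would produce such records; the table and chain names are assumptions, only the interface, ports and prefix come from the log:

    # Hypothetical rule of the kind that would generate the records above
    # (table/chain names assumed; interface, ports and log prefix taken from the log).
    nft add rule inet filter input \
        iifname br-ex tcp dport '{ 9100-9105, 9882 }' ct state new \
        log prefix '"DROPPING: "' drop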
Feb 1 04:04:38 localhost python3.9[116699]: ansible-ansible.legacy.ping Invoked with data=pong Feb 1 04:04:39 localhost python3.9[116803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:04:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58342 DF PROTO=TCP SPT=44692 DPT=9100 SEQ=2904528710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BB6CD0000000001030307) Feb 1 04:04:40 localhost python3.9[116895]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:41 localhost python3.9[116988]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:04:42 localhost python3.9[117080]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:04:42 localhost python3.9[117172]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:04:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32535 DF PROTO=TCP SPT=43816 DPT=9101 SEQ=2743141383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BC3CD0000000001030307) Feb 1 04:04:43 localhost python3.9[117245]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936682.4372227-174-246883270507201/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:04:44 localhost python3.9[117337]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:04:45 localhost python3.9[117433]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:04:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16594 DF PROTO=TCP SPT=34300 DPT=9882 SEQ=1886277604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BCD0D0000000001030307) Feb 1 04:04:46 localhost python3.9[117525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:04:46 localhost python3.9[117615]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:04:47 localhost network[117632]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:04:47 localhost network[117633]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:04:47 localhost network[117634]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:04:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:04:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3962 DF PROTO=TCP SPT=36150 DPT=9102 SEQ=4223255335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BDB0D0000000001030307) Feb 1 04:04:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58344 DF PROTO=TCP SPT=44692 DPT=9100 SEQ=2904528710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BE70D0000000001030307) Feb 1 04:04:52 localhost python3.9[117832]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:04:53 localhost python3.9[117922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:04:54 localhost python3.9[118018]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other 
packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:04:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32537 DF PROTO=TCP SPT=43816 DPT=9101 SEQ=2743141383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64BF30D0000000001030307) Feb 1 04:05:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56940 DF PROTO=TCP SPT=42620 DPT=9882 SEQ=2023601156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C05CE0000000001030307) Feb 1 04:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56941 DF PROTO=TCP SPT=42620 DPT=9882 SEQ=2023601156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C09CD0000000001030307) Feb 1 04:05:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56942 DF PROTO=TCP SPT=42620 DPT=9882 SEQ=2023601156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C11CE0000000001030307) Feb 1 04:05:03 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 1 04:05:03 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 1 04:05:03 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 1 04:05:03 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 1 04:05:03 localhost systemd[1]: Stopping sshd-keygen.target... Feb 1 04:05:03 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:03 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:03 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:03 localhost systemd[1]: Reached target sshd-keygen.target. Feb 1 04:05:03 localhost systemd[1]: Starting OpenSSH server daemon... Feb 1 04:05:03 localhost sshd[118061]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:05:03 localhost systemd[1]: Started OpenSSH server daemon. Feb 1 04:05:03 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:05:03 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 04:05:03 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:05:04 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 04:05:04 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 04:05:04 localhost systemd[1]: run-r0b77db35b6b54652bcf9ed79e522294c.service: Deactivated successfully. 
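The ansible-ansible.legacy.command record at 04:04:54 carries a multi-line shell script in its _raw_params, with newlines escaped as #012 by syslog. Decoded (verbatim apart from un-escaping the newlines), the script reads:

    # This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
    set -euxo pipefail
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
    # This is required for FIPS enabled until trunk.rdoproject.org
    # is not being served from a centos7 host, tracked by
    # https://issues.redhat.com/browse/RHOSZUUL-1517
    dnf -y install crypto-policies
    update-crypto-policies --set FIPS:NO-ENFORCE-EMS
    ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream

    # Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible
    # with rhel 9.2 openssh
    dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
    # FIXME: perform dnf upgrade for other packages in EDPM ansible
    # here we only ensuring that decontainerized libvirt can start
    dnf -y upgrade openstack-selinux
    rm -f /run/virtlogd.pid

    rm -rf repo-setup-main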
Feb 1 04:05:04 localhost systemd[1]: run-r7cf8884a8f1545099072d587b650fd91.service: Deactivated successfully. Feb 1 04:05:05 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 1 04:05:05 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 1 04:05:05 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 1 04:05:05 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 1 04:05:05 localhost systemd[1]: Stopping sshd-keygen.target... Feb 1 04:05:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:05:05 localhost systemd[1]: Reached target sshd-keygen.target. Feb 1 04:05:05 localhost systemd[1]: Starting OpenSSH server daemon... Feb 1 04:05:05 localhost sshd[118325]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:05:05 localhost systemd[1]: Started OpenSSH server daemon. Feb 1 04:05:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21264 DF PROTO=TCP SPT=56142 DPT=9102 SEQ=1489006701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C1F8E0000000001030307) Feb 1 04:05:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51670 DF PROTO=TCP SPT=58496 DPT=9100 SEQ=1825872090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C2BCD0000000001030307) Feb 1 04:05:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32158 DF PROTO=TCP SPT=38530 DPT=9101 SEQ=3852027846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C390E0000000001030307) Feb 1 04:05:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56944 DF PROTO=TCP SPT=42620 DPT=9882 SEQ=2023601156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C410D0000000001030307) Feb 1 04:05:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21266 DF PROTO=TCP SPT=56142 DPT=9102 SEQ=1489006701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C4F0D0000000001030307) Feb 1 04:05:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51672 DF PROTO=TCP SPT=58496 DPT=9100 SEQ=1825872090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C5B0D0000000001030307) Feb 1 04:05:25 localhost sshd[118434]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:05:25 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32160 DF PROTO=TCP SPT=38530 DPT=9101 SEQ=3852027846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C690E0000000001030307) Feb 1 04:05:27 localhost sshd[118436]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:05:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29287 DF PROTO=TCP SPT=55806 DPT=9882 SEQ=2753578153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C7AFC0000000001030307) Feb 1 04:05:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29288 DF PROTO=TCP SPT=55806 DPT=9882 SEQ=2753578153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C7F0D0000000001030307) Feb 1 04:05:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49132 DF PROTO=TCP SPT=53468 DPT=9105 SEQ=429852356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C870E0000000001030307) Feb 1 04:05:35 localhost systemd[1]: tmp-crun.985hQx.mount: Deactivated successfully. Feb 1 04:05:35 localhost podman[118567]: 2026-02-01 09:05:35.322399256 +0000 UTC m=+0.096449940 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 1 04:05:35 localhost podman[118567]: 2026-02-01 09:05:35.430680134 +0000 UTC m=+0.204730788 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1764794109, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:05:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1889 DF PROTO=TCP SPT=57058 DPT=9102 SEQ=104127017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64C948D0000000001030307) Feb 1 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23926 DF PROTO=TCP SPT=45922 DPT=9100 SEQ=368008998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CA10D0000000001030307) Feb 1 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14825 DF PROTO=TCP SPT=36950 DPT=9101 SEQ=2660027807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CAE4E0000000001030307) Feb 1 04:05:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29291 DF PROTO=TCP SPT=55806 DPT=9882 SEQ=2753578153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CB70D0000000001030307) Feb 1 04:05:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1891 DF PROTO=TCP SPT=57058 DPT=9102 SEQ=104127017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CC50E0000000001030307) Feb 1 04:05:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23928 DF PROTO=TCP SPT=45922 DPT=9100 SEQ=368008998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CD10D0000000001030307) Feb 1 04:05:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14827 DF PROTO=TCP SPT=36950 DPT=9101 SEQ=2660027807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CDF0D0000000001030307) Feb 1 04:06:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=514 DF PROTO=TCP SPT=47956 DPT=9882 SEQ=3399857180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CF02D0000000001030307) Feb 1 04:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=515 DF PROTO=TCP SPT=47956 DPT=9882 SEQ=3399857180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CF44E0000000001030307) Feb 1 04:06:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=516 DF PROTO=TCP SPT=47956 DPT=9882 SEQ=3399857180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64CFC4E0000000001030307) Feb 1 04:06:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11811 DF PROTO=TCP SPT=58740 DPT=9102 SEQ=3271871497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D09CD0000000001030307) Feb 1 04:06:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34807 DF PROTO=TCP SPT=54414 DPT=9100 SEQ=1532626364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D164E0000000001030307) Feb 1 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57432 DF PROTO=TCP SPT=52170 DPT=9101 SEQ=2258476920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D234E0000000001030307) Feb 1 04:06:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=518 DF PROTO=TCP SPT=47956 DPT=9882 SEQ=3399857180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D2D0D0000000001030307) Feb 1 04:06:16 localhost kernel: SELinux: Converting 2739 SID table entries... 
Feb 1 04:06:16 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:06:16 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:06:16 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:06:16 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:06:16 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:06:16 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:06:16 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:06:18 localhost sshd[119074]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:06:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11813 DF PROTO=TCP SPT=58740 DPT=9102 SEQ=3271871497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D390D0000000001030307) Feb 1 04:06:19 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=17 res=1 Feb 1 04:06:20 localhost python3.9[119153]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:06:20 localhost python3.9[119245]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:06:21 localhost python3.9[119318]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936780.3313675-424-107940076111585/.source.fact _original_basename=._zpp9x14 follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:06:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34809 DF PROTO=TCP SPT=54414 DPT=9100 SEQ=1532626364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D470E0000000001030307) Feb 1 04:06:22 localhost python3.9[119408]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:06:23 localhost python3.9[119506]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:06:24 localhost python3.9[119560]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:06:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57434 DF PROTO=TCP SPT=52170 DPT=9101 SEQ=2258476920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D530D0000000001030307) Feb 1 04:06:27 localhost systemd[1]: Reloading. Feb 1 04:06:27 localhost systemd-rc-local-generator[119597]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:06:27 localhost systemd-sysv-generator[119600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:06:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:06:28 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 04:06:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7941 DF PROTO=TCP SPT=57284 DPT=9882 SEQ=2716998411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D655C0000000001030307) Feb 1 04:06:30 localhost python3.9[119699]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7942 DF PROTO=TCP SPT=57284 DPT=9882 SEQ=2716998411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D694D0000000001030307) Feb 1 04:06:31 localhost python3.9[119938]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Feb 1 04:06:32 localhost python3.9[120030]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Feb 1 04:06:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7943 DF PROTO=TCP SPT=57284 DPT=9882 SEQ=2716998411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D714E0000000001030307) Feb 1 04:06:34 localhost python3.9[120123]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:06:34 localhost python3.9[120215]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Feb 1 04:06:36 localhost python3.9[120307]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:06:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47078 DF PROTO=TCP SPT=43750 DPT=9102 SEQ=3647426648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D7F0D0000000001030307) Feb 1 04:06:37 localhost python3.9[120399]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:06:37 localhost python3.9[120509]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769936796.7140834-747-260046596405157/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:06:39 localhost python3.9[120640]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:06:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49117 DF PROTO=TCP SPT=44728 DPT=9100 SEQ=3852835967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D8B8E0000000001030307) Feb 1 04:06:40 localhost python3.9[120734]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Feb 1 04:06:41 localhost python3.9[120827]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Feb 1 04:06:41 localhost python3.9[120920]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 1 04:06:42 localhost python3.9[121018]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Feb 1 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46249 DF PROTO=TCP SPT=59974 DPT=9101 SEQ=4245491564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64D988E0000000001030307) Feb 1 04:06:43 localhost python3.9[121110]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:06:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7945 DF PROTO=TCP SPT=57284 DPT=9882 SEQ=2716998411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DA10E0000000001030307) Feb 1 04:06:46 localhost sshd[121113]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:06:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47080 DF PROTO=TCP SPT=43750 DPT=9102 SEQ=3647426648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DAF0E0000000001030307) Feb 1 04:06:51 localhost python3.9[121206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:06:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49119 DF PROTO=TCP SPT=44728 DPT=9100 SEQ=3852835967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DBB0D0000000001030307) Feb 1 04:06:52 localhost python3.9[121298]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:06:52 localhost python3.9[121371]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769936811.7612963-1021-12669778384064/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:06:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46251 DF PROTO=TCP SPT=59974 DPT=9101 SEQ=4245491564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DC90E0000000001030307) Feb 1 04:06:58 localhost python3.9[121463]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:06:58 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 1 04:06:58 localhost systemd[1]: Stopped Load Kernel Modules. Feb 1 04:06:58 localhost systemd[1]: Stopping Load Kernel Modules... Feb 1 04:06:58 localhost systemd[1]: Starting Load Kernel Modules... Feb 1 04:06:58 localhost systemd-modules-load[121467]: Module 'msr' is built in Feb 1 04:06:58 localhost systemd[1]: Finished Load Kernel Modules. Feb 1 04:06:59 localhost python3.9[121560]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:06:59 localhost python3.9[121633]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769936818.611207-1090-125100225835693/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1286 DF PROTO=TCP SPT=52774 DPT=9882 SEQ=819140579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DDA8D0000000001030307) Feb 1 04:07:00 localhost python3.9[121725]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1287 DF PROTO=TCP SPT=52774 DPT=9882 SEQ=819140579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DDE8D0000000001030307) Feb 1 04:07:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1288 DF PROTO=TCP SPT=52774 DPT=9882 SEQ=819140579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DE68D0000000001030307) Feb 1 04:07:05 localhost python3.9[121817]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:07:06 localhost 
python3.9[121909]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Feb 1 04:07:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1049 DF PROTO=TCP SPT=60934 DPT=9102 SEQ=3083332741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64DF40E0000000001030307) Feb 1 04:07:06 localhost python3.9[121999]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:07:07 localhost python3.9[122091]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:07:07 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 1 04:07:08 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 1 04:07:08 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 1 04:07:08 localhost systemd[1]: tuned.service: Consumed 1.736s CPU time, no IO. Feb 1 04:07:08 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 1 04:07:09 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 1 04:07:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55600 DF PROTO=TCP SPT=38318 DPT=9100 SEQ=3734527054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E008D0000000001030307) Feb 1 04:07:10 localhost python3.9[122193]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Feb 1 04:07:11 localhost sshd[122208]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47596 DF PROTO=TCP SPT=59356 DPT=9101 SEQ=1807572945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E0DCD0000000001030307) Feb 1 04:07:13 localhost python3.9[122287]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:07:13 localhost systemd[1]: Reloading. Feb 1 04:07:13 localhost systemd-sysv-generator[122320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:07:13 localhost systemd-rc-local-generator[122315]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:07:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:07:14 localhost python3.9[122417]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:07:14 localhost systemd[1]: Reloading. Feb 1 04:07:14 localhost systemd-sysv-generator[122450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:07:14 localhost systemd-rc-local-generator[122444]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:07:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:07:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1290 DF PROTO=TCP SPT=52774 DPT=9882 SEQ=819140579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E170D0000000001030307) Feb 1 04:07:16 localhost python3.9[122548]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:16 localhost python3.9[122641]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:16 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Feb 1 04:07:17 localhost python3.9[122734]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1051 DF PROTO=TCP SPT=60934 DPT=9102 SEQ=3083332741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E250E0000000001030307) Feb 1 04:07:19 localhost python3.9[122833]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:20 localhost python3.9[122926]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:07:20 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 1 04:07:20 localhost systemd[1]: Stopped Apply Kernel Variables. Feb 1 04:07:20 localhost systemd[1]: Stopping Apply Kernel Variables... Feb 1 04:07:20 localhost systemd[1]: Starting Apply Kernel Variables... Feb 1 04:07:20 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 1 04:07:20 localhost systemd[1]: Finished Apply Kernel Variables. Feb 1 04:07:20 localhost systemd[1]: session-38.scope: Deactivated successfully. Feb 1 04:07:20 localhost systemd[1]: session-38.scope: Consumed 1min 56.046s CPU time. Feb 1 04:07:20 localhost systemd-logind[761]: Session 38 logged out. Waiting for processes to exit. Feb 1 04:07:20 localhost systemd-logind[761]: Removed session 38. 
Feb 1 04:07:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55602 DF PROTO=TCP SPT=38318 DPT=9100 SEQ=3734527054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E310E0000000001030307) Feb 1 04:07:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47598 DF PROTO=TCP SPT=59356 DPT=9101 SEQ=1807572945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E3D0D0000000001030307) Feb 1 04:07:25 localhost sshd[122946]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:07:25 localhost systemd-logind[761]: New session 39 of user zuul. Feb 1 04:07:25 localhost systemd[1]: Started Session 39 of User zuul. Feb 1 04:07:26 localhost python3.9[123039]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:07:27 localhost python3.9[123133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:07:29 localhost python3.9[123229]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9113 DF PROTO=TCP SPT=50094 DPT=9882 SEQ=3801450267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E4FBC0000000001030307) Feb 1 04:07:30 localhost python3.9[123320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:07:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9114 DF PROTO=TCP SPT=50094 DPT=9882 SEQ=3801450267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E53CD0000000001030307) Feb 1 04:07:31 localhost python3.9[123416]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:07:32 localhost python3.9[123470]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:07:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9115 DF PROTO=TCP SPT=50094 DPT=9882 SEQ=3801450267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AA64E5BCD0000000001030307) Feb 1 04:07:36 localhost python3.9[123564]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:07:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32025 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=972267169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E694D0000000001030307) Feb 1 04:07:37 localhost python3.9[123711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:07:38 localhost python3.9[123803]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:07:39 localhost python3.9[123959]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:07:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58560 DF PROTO=TCP SPT=58548 DPT=9100 SEQ=1039894359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E75CD0000000001030307) Feb 1 04:07:39 localhost python3.9[124019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:07:40 localhost python3.9[124126]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:07:41 localhost python3.9[124199]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769936860.164123-320-202167324695930/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:42 localhost python3.9[124291]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 
backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:42 localhost python3.9[124383]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61490 DF PROTO=TCP SPT=41400 DPT=9101 SEQ=282903222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E830D0000000001030307) Feb 1 04:07:43 localhost python3.9[124475]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:44 localhost python3.9[124567]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:07:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9117 DF PROTO=TCP SPT=50094 DPT=9882 SEQ=3801450267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E8B0D0000000001030307) Feb 1 04:07:45 localhost python3.9[124657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:07:46 localhost python3.9[124751]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:07:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32027 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=972267169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64E990D0000000001030307) Feb 1 04:07:50 localhost python3.9[124845]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:07:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58562 DF PROTO=TCP SPT=58548 DPT=9100 SEQ=1039894359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EA50E0000000001030307) Feb 1 04:07:54 localhost python3.9[124939]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:07:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61492 DF PROTO=TCP SPT=41400 DPT=9101 SEQ=282903222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EB30D0000000001030307) Feb 1 04:07:58 localhost python3.9[125039]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:08:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20840 DF PROTO=TCP SPT=42994 DPT=9882 SEQ=3843553390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EC4ED0000000001030307) Feb 1 04:08:00 localhost sshd[125042]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20841 DF PROTO=TCP SPT=42994 DPT=9882 SEQ=3843553390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EC90D0000000001030307) Feb 1 04:08:02 localhost python3.9[125135]: ansible-ansible.legacy.dnf 
Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:08:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20842 DF PROTO=TCP SPT=42994 DPT=9882 SEQ=3843553390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64ED10D0000000001030307) Feb 1 04:08:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43412 DF PROTO=TCP SPT=33952 DPT=9102 SEQ=2493936316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EDE8D0000000001030307) Feb 1 04:08:06 localhost python3.9[125229]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:08:08 localhost sshd[125232]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3615 DF PROTO=TCP SPT=50860 DPT=9100 SEQ=3681374626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EEB0D0000000001030307) Feb 1 04:08:10 localhost python3.9[125325]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36417 DF PROTO=TCP SPT=49922 DPT=9101 SEQ=2955079836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64EF80D0000000001030307) Feb 1 04:08:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20844 DF PROTO=TCP SPT=42994 
DPT=9882 SEQ=3843553390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F010D0000000001030307) Feb 1 04:08:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43414 DF PROTO=TCP SPT=33952 DPT=9102 SEQ=2493936316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F0F0D0000000001030307) Feb 1 04:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3617 DF PROTO=TCP SPT=50860 DPT=9100 SEQ=3681374626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F1B0E0000000001030307) Feb 1 04:08:23 localhost python3.9[125493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:08:24 localhost python3.9[125598]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:08:25 localhost python3.9[125671]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769936904.05408-725-8276830412287/.source.json _original_basename=._dg90376 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:08:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36419 DF PROTO=TCP SPT=49922 DPT=9101 SEQ=2955079836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F290D0000000001030307) Feb 1 04:08:26 localhost python3.9[125763]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51976 DF PROTO=TCP SPT=54358 DPT=9882 SEQ=3336363457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F3A1D0000000001030307) Feb 1 04:08:30 
localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation. Feb 1 04:08:30 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 04:08:30 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:08:30 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51977 DF PROTO=TCP SPT=54358 DPT=9882 SEQ=3336363457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F3E0D0000000001030307) Feb 1 04:08:31 localhost podman[125775]: 2026-02-01 09:08:26.196099361 +0000 UTC m=+0.039095555 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 1 04:08:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51978 DF PROTO=TCP SPT=54358 DPT=9882 SEQ=3336363457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F460D0000000001030307) Feb 1 04:08:33 localhost python3.9[125976]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51559 DF PROTO=TCP SPT=42946 DPT=9102 SEQ=2849249556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F53CD0000000001030307) Feb 1 04:08:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58046 DF PROTO=TCP SPT=59062 DPT=9100 SEQ=2362106755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F604D0000000001030307) Feb 1 04:08:41 localhost podman[125991]: 2026-02-01 09:08:33.497796694 +0000 UTC m=+0.021282301 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:08:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36420 DF PROTO=TCP SPT=49922 DPT=9101 SEQ=2955079836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F690D0000000001030307) Feb 1 
04:08:42 localhost python3.9[126269]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:44 localhost podman[126281]: 2026-02-01 09:08:42.678041594 +0000 UTC m=+0.047298129 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 1 04:08:45 localhost python3.9[126444]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51980 DF PROTO=TCP SPT=54358 DPT=9882 SEQ=3336363457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F770D0000000001030307) Feb 1 04:08:46 localhost podman[126457]: 2026-02-01 09:08:45.537695802 +0000 UTC m=+0.055149394 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:08:48 localhost python3.9[126622]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51561 DF PROTO=TCP SPT=42946 DPT=9102 SEQ=2849249556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AA64F830D0000000001030307) Feb 1 04:08:50 localhost sshd[126662]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:08:51 localhost podman[126636]: 2026-02-01 09:08:48.602813549 +0000 UTC m=+0.044055049 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Feb 1 04:08:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58048 DF PROTO=TCP SPT=59062 DPT=9100 SEQ=2362106755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F910D0000000001030307) Feb 1 04:08:52 localhost python3.9[126816]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 1 04:08:54 localhost podman[126828]: 2026-02-01 09:08:52.8694345 +0000 UTC m=+0.039621071 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Feb 1 04:08:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29482 DF PROTO=TCP SPT=37232 DPT=9101 SEQ=3755765723 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64F9D0D0000000001030307) Feb 1 04:08:56 localhost systemd[1]: session-39.scope: Deactivated successfully. Feb 1 04:08:56 localhost systemd[1]: session-39.scope: Consumed 1min 28.140s CPU time. Feb 1 04:08:56 localhost systemd-logind[761]: Session 39 logged out. Waiting for processes to exit. Feb 1 04:08:56 localhost systemd-logind[761]: Removed session 39. Feb 1 04:09:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33598 DF PROTO=TCP SPT=56150 DPT=9882 SEQ=2685807276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FAF4D0000000001030307) Feb 1 04:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33599 DF PROTO=TCP SPT=56150 DPT=9882 SEQ=2685807276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FB34D0000000001030307) Feb 1 04:09:01 localhost sshd[126935]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:09:01 localhost systemd-logind[761]: New session 40 of user zuul. Feb 1 04:09:01 localhost systemd[1]: Started Session 40 of User zuul. 
Feb 1 04:09:02 localhost python3.9[127028]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:09:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33600 DF PROTO=TCP SPT=56150 DPT=9882 SEQ=2685807276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FBB4D0000000001030307) Feb 1 04:09:04 localhost python3.9[127181]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Feb 1 04:09:05 localhost python3.9[127274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:09:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24183 DF PROTO=TCP SPT=38264 DPT=9102 SEQ=2378727652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FC8CE0000000001030307) Feb 1 04:09:06 localhost python3.9[127328]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:09:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42797 DF PROTO=TCP SPT=50738 DPT=9100 SEQ=80647556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FD54D0000000001030307) Feb 1 04:09:11 localhost python3.9[127621]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10121 DF PROTO=TCP SPT=41238 DPT=9101 SEQ=3118595880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FE28E0000000001030307) Feb 1 04:09:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33602 DF PROTO=TCP SPT=56150 DPT=9882 SEQ=2685807276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FEB0D0000000001030307) Feb 1 04:09:16 localhost python3.9[127715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service 
state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:09:18 localhost python3.9[127808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:09:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24185 DF PROTO=TCP SPT=38264 DPT=9102 SEQ=2378727652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA64FF90D0000000001030307) Feb 1 04:09:19 localhost python3.9[127900]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Feb 1 04:09:21 localhost kernel: SELinux: Converting 2741 SID table entries... Feb 1 04:09:21 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:09:21 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:09:21 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:09:21 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:09:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:09:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:09:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:09:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42799 DF PROTO=TCP SPT=50738 DPT=9100 SEQ=80647556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650050E0000000001030307) Feb 1 04:09:25 localhost python3.9[127996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:09:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10123 DF PROTO=TCP SPT=41238 DPT=9101 SEQ=3118595880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650130D0000000001030307) Feb 1 04:09:25 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=18 res=1 Feb 1 04:09:26 localhost python3.9[128094]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:09:29 localhost sshd[128097]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:09:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49239 DF PROTO=TCP SPT=48082 DPT=9882 SEQ=2390964984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650247C0000000001030307) Feb 1 04:09:30 localhost python3.9[128190]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49240 DF PROTO=TCP SPT=48082 DPT=9882 SEQ=2390964984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6502A810000000001030307) Feb 1 04:09:32 localhost python3.9[128435]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None Feb 1 04:09:33 localhost python3.9[128525]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:09:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53840 DF PROTO=TCP SPT=60910 DPT=9105 SEQ=2581934111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650310D0000000001030307) Feb 1 04:09:34 localhost python3.9[128619]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:09:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33349 DF PROTO=TCP SPT=60830 DPT=9102 SEQ=1861018260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6503E0E0000000001030307) Feb 1 04:09:38 localhost python3.9[128713]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None 
nobest=None releasever=None Feb 1 04:09:38 localhost sshd[128715]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:09:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48745 DF PROTO=TCP SPT=33710 DPT=9100 SEQ=962843596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6504A8D0000000001030307) Feb 1 04:09:42 localhost python3.9[128809]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 04:09:42 localhost systemd[1]: Reloading. Feb 1 04:09:42 localhost systemd-rc-local-generator[128865]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:09:42 localhost systemd-sysv-generator[128868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:09:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39016 DF PROTO=TCP SPT=38034 DPT=9101 SEQ=30187069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65057CD0000000001030307) Feb 1 04:09:44 localhost python3.9[129003]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:09:45 localhost python3.9[129095]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:45 localhost python3.9[129189]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49243 DF PROTO=TCP SPT=48082 DPT=9882 SEQ=2390964984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650630D0000000001030307) Feb 1 04:09:46 localhost python3.9[129281]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False 
unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:47 localhost python3.9[129388]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:09:48 localhost python3.9[129461]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936986.9668894-560-141302530002294/.source _original_basename=.k56qqdux follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:48 localhost python3.9[129553]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33351 DF PROTO=TCP SPT=60830 DPT=9102 SEQ=1861018260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6506F0D0000000001030307) Feb 1 04:09:49 localhost python3.9[129645]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Feb 1 04:09:50 localhost python3.9[129737]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:51 localhost python3.9[129829]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:09:51 localhost python3.9[129902]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936990.8912172-686-230972531386785/.source.yaml _original_basename=.xhys5tp5 follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:09:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48747 DF PROTO=TCP SPT=33710 DPT=9100 SEQ=962843596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6507B0D0000000001030307) Feb 1 04:09:52 localhost python3.9[129994]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml 
src=/etc/os-net-config/config.yaml Feb 1 04:09:54 localhost ansible-async_wrapper.py[130100]: Invoked with j107560936814 300 /home/zuul/.ansible/tmp/ansible-tmp-1769936993.2521608-758-269011292602205/AnsiballZ_edpm_os_net_config.py _ Feb 1 04:09:54 localhost ansible-async_wrapper.py[130103]: Starting module and watcher Feb 1 04:09:54 localhost ansible-async_wrapper.py[130103]: Start watching 130104 (300) Feb 1 04:09:54 localhost ansible-async_wrapper.py[130104]: Start module (130104) Feb 1 04:09:54 localhost ansible-async_wrapper.py[130100]: Return async_wrapper task started. Feb 1 04:09:54 localhost python3.9[130105]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False Feb 1 04:09:54 localhost ansible-async_wrapper.py[130104]: Module complete (130104) Feb 1 04:09:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39018 DF PROTO=TCP SPT=38034 DPT=9101 SEQ=30187069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650870D0000000001030307) Feb 1 04:09:57 localhost python3.9[130209]: ansible-ansible.legacy.async_status Invoked with jid=j107560936814.130100 mode=status _async_dir=/root/.ansible_async Feb 1 04:09:58 localhost python3.9[130268]: ansible-ansible.legacy.async_status Invoked with jid=j107560936814.130100 mode=cleanup _async_dir=/root/.ansible_async Feb 1 04:09:59 localhost ansible-async_wrapper.py[130103]: Done in kid B. Feb 1 04:09:59 localhost python3.9[130360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:09:59 localhost python3.9[130433]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936998.6342192-824-114211570957403/.source.returncode _original_basename=.gichcccz follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47624 DF PROTO=TCP SPT=53862 DPT=9882 SEQ=2007175808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65099AD0000000001030307) Feb 1 04:10:00 localhost python3.9[130525]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:10:00 localhost python3.9[130598]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769936999.9405298-873-245219907110846/.source.cfg _original_basename=.stgtcv9p follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47625 DF PROTO=TCP SPT=53862 DPT=9882 SEQ=2007175808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6509DCE0000000001030307) Feb 1 04:10:01 localhost python3.9[130690]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:10:01 localhost systemd[1]: Reloading Network Manager... Feb 1 04:10:01 localhost NetworkManager[5972]: [1769937001.8844] audit: op="reload" arg="0" pid=130694 uid=0 result="success" Feb 1 04:10:01 localhost NetworkManager[5972]: [1769937001.8851] config: signal: SIGHUP (no changes from disk) Feb 1 04:10:01 localhost systemd[1]: Reloaded Network Manager. Feb 1 04:10:02 localhost systemd[1]: session-40.scope: Deactivated successfully. Feb 1 04:10:02 localhost systemd[1]: session-40.scope: Consumed 35.353s CPU time. Feb 1 04:10:02 localhost systemd-logind[761]: Session 40 logged out. Waiting for processes to exit. Feb 1 04:10:02 localhost systemd-logind[761]: Removed session 40. Feb 1 04:10:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47626 DF PROTO=TCP SPT=53862 DPT=9882 SEQ=2007175808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650A5CD0000000001030307) Feb 1 04:10:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30302 DF PROTO=TCP SPT=49992 DPT=9102 SEQ=2081088930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650B34D0000000001030307) Feb 1 04:10:07 localhost sshd[130709]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:10:07 localhost systemd-logind[761]: New session 41 of user zuul. Feb 1 04:10:07 localhost systemd[1]: Started Session 41 of User zuul. Feb 1 04:10:08 localhost python3.9[130802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:10:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62741 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=29216029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650BFCE0000000001030307) Feb 1 04:10:09 localhost python3.9[130896]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:10:11 localhost python3.9[131041]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:10:11 localhost systemd[1]: session-41.scope: Deactivated successfully. Feb 1 04:10:11 localhost systemd[1]: session-41.scope: Consumed 1.944s CPU time. Feb 1 04:10:11 localhost systemd-logind[761]: Session 41 logged out. Waiting for processes to exit. 
Feb 1 04:10:11 localhost systemd-logind[761]: Removed session 41. Feb 1 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26417 DF PROTO=TCP SPT=35542 DPT=9101 SEQ=3011396480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650CCCD0000000001030307) Feb 1 04:10:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47628 DF PROTO=TCP SPT=53862 DPT=9882 SEQ=2007175808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650D50D0000000001030307) Feb 1 04:10:17 localhost sshd[131057]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:10:17 localhost systemd-logind[761]: New session 42 of user zuul. Feb 1 04:10:17 localhost systemd[1]: Started Session 42 of User zuul. Feb 1 04:10:18 localhost python3.9[131150]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:10:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30304 DF PROTO=TCP SPT=49992 DPT=9102 SEQ=2081088930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650E30D0000000001030307) Feb 1 04:10:19 localhost python3.9[131244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:10:20 localhost python3.9[131340]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:10:21 localhost python3.9[131394]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:10:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62743 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=29216029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650EF0E0000000001030307) Feb 1 04:10:24 localhost sshd[131411]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:10:25 localhost python3.9[131490]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:10:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26419 DF PROTO=TCP SPT=35542 DPT=9101 SEQ=3011396480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA650FD0D0000000001030307) Feb 1 04:10:26 localhost python3.9[131637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:27 localhost python3.9[131729]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:10:28 localhost python3.9[131833]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:10:28 localhost python3.9[131881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:29 localhost python3.9[131973]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:10:29 localhost python3.9[132021]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:10:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36375 DF PROTO=TCP SPT=43050 DPT=9882 SEQ=2122265823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6510EDC0000000001030307) Feb 1 04:10:30 localhost python3.9[132113]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36376 DF PROTO=TCP SPT=43050 DPT=9882 SEQ=2122265823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65112CE0000000001030307) Feb 1 04:10:31 localhost python3.9[132205]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 
option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:10:31 localhost python3.9[132297]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:10:32 localhost python3.9[132389]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 1 04:10:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36377 DF PROTO=TCP SPT=43050 DPT=9882 SEQ=2122265823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6511ACD0000000001030307) Feb 1 04:10:33 localhost python3.9[132481]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:10:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7917 DF PROTO=TCP SPT=54122 DPT=9102 SEQ=2889597690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651288D0000000001030307) Feb 1 04:10:37 localhost python3.9[132575]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:10:38 localhost python3.9[132669]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:10:39 localhost python3.9[132761]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17746 DF PROTO=TCP SPT=49134 DPT=9100 SEQ=892466814 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651350D0000000001030307) Feb 1 04:10:40 localhost python3.9[132853]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:10:41 localhost python3.9[132946]: ansible-service_facts Invoked Feb 1 04:10:41 localhost network[132963]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:10:41 localhost network[132964]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:10:41 localhost network[132965]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:10:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3934 DF PROTO=TCP SPT=50832 DPT=9101 SEQ=1103186030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651420D0000000001030307) Feb 1 04:10:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36379 DF PROTO=TCP SPT=43050 DPT=9882 SEQ=2122265823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6514B0D0000000001030307) Feb 1 04:10:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:10:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:10:47 localhost python3.9[133348]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:10:48 localhost sshd[133381]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7919 DF PROTO=TCP SPT=54122 DPT=9102 SEQ=2889597690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651590E0000000001030307) Feb 1 04:10:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING 
STATS ------- Feb 1 04:10:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:10:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17748 DF PROTO=TCP SPT=49134 DPT=9100 SEQ=892466814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651650D0000000001030307) Feb 1 04:10:52 localhost python3.9[133508]: ansible-package_facts Invoked with manager=['auto'] strategy=first Feb 1 04:10:54 localhost python3.9[133600]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:10:54 localhost python3.9[133675]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937053.6857133-654-257954298262455/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3936 DF PROTO=TCP SPT=50832 DPT=9101 SEQ=1103186030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651730D0000000001030307) Feb 1 04:10:55 localhost python3.9[133769]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:10:56 localhost python3.9[133844]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937055.346552-699-202321833157605/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:58 localhost python3.9[133938]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:10:59 localhost python3.9[134032]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] 
filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:11:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48920 DF PROTO=TCP SPT=42058 DPT=9882 SEQ=3624156792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651840C0000000001030307) Feb 1 04:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48921 DF PROTO=TCP SPT=42058 DPT=9882 SEQ=3624156792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651880D0000000001030307) Feb 1 04:11:01 localhost python3.9[134086]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:11:02 localhost python3.9[134180]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:11:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48922 DF PROTO=TCP SPT=42058 DPT=9882 SEQ=3624156792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651900D0000000001030307) Feb 1 04:11:03 localhost python3.9[134234]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:11:03 localhost chronyd[25933]: chronyd exiting Feb 1 04:11:03 localhost systemd[1]: Stopping NTP client/server... Feb 1 04:11:03 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 1 04:11:03 localhost systemd[1]: Stopped NTP client/server. Feb 1 04:11:03 localhost systemd[1]: Starting NTP client/server... Feb 1 04:11:03 localhost chronyd[134242]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 1 04:11:03 localhost chronyd[134242]: Frequency -30.790 +/- 0.505 ppm read from /var/lib/chrony/drift Feb 1 04:11:03 localhost chronyd[134242]: Loaded seccomp filter (level 2) Feb 1 04:11:03 localhost systemd[1]: Started NTP client/server. Feb 1 04:11:04 localhost systemd[1]: session-42.scope: Deactivated successfully. Feb 1 04:11:04 localhost systemd[1]: session-42.scope: Consumed 27.517s CPU time. Feb 1 04:11:04 localhost systemd-logind[761]: Session 42 logged out. Waiting for processes to exit. Feb 1 04:11:04 localhost systemd-logind[761]: Removed session 42. 
Feb 1 04:11:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51770 DF PROTO=TCP SPT=44996 DPT=9102 SEQ=2616082020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6519D8D0000000001030307) Feb 1 04:11:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8569 DF PROTO=TCP SPT=42590 DPT=9100 SEQ=4037927389 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651AA0D0000000001030307) Feb 1 04:11:09 localhost sshd[134258]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:11:10 localhost systemd-logind[761]: New session 43 of user zuul. Feb 1 04:11:10 localhost systemd[1]: Started Session 43 of User zuul. Feb 1 04:11:10 localhost sshd[134352]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:11:11 localhost python3.9[134351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:11:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3937 DF PROTO=TCP SPT=50832 DPT=9101 SEQ=1103186030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651B30D0000000001030307) Feb 1 04:11:12 localhost python3.9[134449]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:13 localhost python3.9[134555]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:13 localhost python3.9[134603]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.xmvua4r2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:15 localhost python3.9[134695]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48924 DF PROTO=TCP SPT=42058 DPT=9882 SEQ=3624156792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651C10D0000000001030307) Feb 1 04:11:15 localhost python3.9[134770]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937074.5351927-140-265104662460362/.source _original_basename=.08twytty follow=False 
checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:16 localhost python3.9[134862]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:11:16 localhost auditd[727]: Audit daemon rotating log files Feb 1 04:11:17 localhost python3.9[134954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:17 localhost python3.9[135027]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937076.7347267-213-197154267585525/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:11:18 localhost python3.9[135119]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51772 DF PROTO=TCP SPT=44996 DPT=9102 SEQ=2616082020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651CD0D0000000001030307) Feb 1 04:11:18 localhost python3.9[135192]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937077.8888273-213-276460387304321/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:11:19 localhost python3.9[135284]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:20 localhost python3.9[135376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False 
Feb 1 04:11:21 localhost python3.9[135449]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937080.0196364-323-96863578069649/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:21 localhost python3.9[135541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8571 DF PROTO=TCP SPT=42590 DPT=9100 SEQ=4037927389 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651DB0D0000000001030307) Feb 1 04:11:22 localhost python3.9[135614]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937081.3297405-368-7600610106807/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:23 localhost python3.9[135706]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:11:23 localhost systemd[1]: Reloading. Feb 1 04:11:23 localhost systemd-rc-local-generator[135727]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:11:23 localhost systemd-sysv-generator[135732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:11:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:11:23 localhost systemd[1]: Reloading. Feb 1 04:11:23 localhost systemd-rc-local-generator[135769]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:11:23 localhost systemd-sysv-generator[135772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:11:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:11:24 localhost systemd[1]: Starting EDPM Container Shutdown... Feb 1 04:11:24 localhost systemd[1]: Finished EDPM Container Shutdown. 
Feb 1 04:11:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30269 DF PROTO=TCP SPT=33402 DPT=9101 SEQ=2116717942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651E70D0000000001030307) Feb 1 04:11:26 localhost python3.9[135875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:26 localhost python3.9[135948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937085.5569978-438-185206347603244/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:27 localhost python3.9[136040]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:27 localhost python3.9[136113]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937086.8911963-482-10817982704719/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:28 localhost python3.9[136205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:11:28 localhost systemd[1]: Reloading. Feb 1 04:11:28 localhost systemd-rc-local-generator[136226]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:11:28 localhost systemd-sysv-generator[136234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:11:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:11:29 localhost systemd[1]: Starting Create netns directory... Feb 1 04:11:29 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 04:11:29 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 04:11:29 localhost systemd[1]: Finished Create netns directory. 
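The ansible.builtin.systemd tasks above are invoked with daemon_reload=True, enabled=True and state=started, which accounts for the "Reloading." entries followed by the unit being started. A rough Python sketch of that sequence, assuming systemctl is on PATH; it is an approximation of what the tasks request, not the module's actual implementation.

    import subprocess

    def enable_and_start(unit):
        # Reload unit files, enable the unit, then start it -- roughly what
        # daemon_reload=True enabled=True state=started requests above.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "start", unit], check=True)

    # e.g. enable_and_start("netns-placeholder.service")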
Feb 1 04:11:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36903 DF PROTO=TCP SPT=51088 DPT=9882 SEQ=53325184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651F93C0000000001030307) Feb 1 04:11:30 localhost python3.9[136337]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:11:30 localhost network[136354]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:11:30 localhost network[136355]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:11:30 localhost network[136356]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36904 DF PROTO=TCP SPT=51088 DPT=9882 SEQ=53325184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA651FD4D0000000001030307) Feb 1 04:11:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36905 DF PROTO=TCP SPT=51088 DPT=9882 SEQ=53325184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652054E0000000001030307) Feb 1 04:11:36 localhost python3.9[136558]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44201 DF PROTO=TCP SPT=38496 DPT=9102 SEQ=86592264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65212CD0000000001030307) Feb 1 04:11:36 localhost python3.9[136633]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937095.7038262-606-275940723110126/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:37 localhost python3.9[136726]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:11:37 localhost systemd[1]: Reloading OpenSSH server daemon... Feb 1 04:11:37 localhost systemd[1]: Reloaded OpenSSH server daemon. 
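The sshd_config copy above passes validate=/usr/sbin/sshd -T -f %s, so the candidate file is parsed by sshd in extended test mode before it replaces /etc/ssh/sshd_config, and the daemon is then reloaded. A small Python sketch of the same pre-flight check:

    import subprocess

    def sshd_config_is_valid(candidate_path):
        # sshd -T dumps the effective configuration parsed from -f <file>;
        # a non-zero exit status means the candidate file would be rejected.
        result = subprocess.run(
            ["/usr/sbin/sshd", "-T", "-f", candidate_path],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    # e.g. sshd_config_is_valid("/etc/ssh/sshd_config")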
Feb 1 04:11:37 localhost sshd[118325]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:11:38 localhost python3.9[136822]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:39 localhost python3.9[136914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28144 DF PROTO=TCP SPT=55938 DPT=9100 SEQ=2373809073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6521F4D0000000001030307) Feb 1 04:11:39 localhost python3.9[136987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937098.8293376-699-2733109829542/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:40 localhost python3.9[137079]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Feb 1 04:11:40 localhost systemd[1]: Starting Time & Date Service... Feb 1 04:11:41 localhost systemd[1]: Started Time & Date Service. 
Feb 1 04:11:41 localhost python3.9[137175]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:42 localhost python3.9[137267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18998 DF PROTO=TCP SPT=46222 DPT=9101 SEQ=1641614223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6522C8D0000000001030307) Feb 1 04:11:43 localhost python3.9[137340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937102.1820183-804-173387805585602/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:44 localhost python3.9[137432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:44 localhost python3.9[137505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937103.5909505-849-137157334630153/.source.yaml _original_basename=.pngzmdvh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:45 localhost python3.9[137597]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36907 DF PROTO=TCP SPT=51088 DPT=9882 SEQ=53325184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652350E0000000001030307) Feb 1 04:11:45 localhost python3.9[137672]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937104.8498-894-263492183878018/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:46 localhost python3.9[137764]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:11:47 localhost python3.9[137857]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:11:48 localhost python3[137950]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 1 04:11:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44203 DF PROTO=TCP SPT=38496 DPT=9102 SEQ=86592264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652430D0000000001030307) Feb 1 04:11:48 localhost python3.9[138042]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:49 localhost python3.9[138115]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937108.4793015-1011-134968220902261/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:50 localhost python3.9[138207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:50 localhost python3.9[138280]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937109.689764-1055-269122162516297/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:51 localhost python3.9[138372]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28146 DF PROTO=TCP SPT=55938 DPT=9100 SEQ=2373809073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6524F0E0000000001030307) Feb 1 04:11:52 localhost python3.9[138475]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937111.0106928-1101-171140909163157/.source.nft 
follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:52 localhost python3.9[138600]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:53 localhost python3.9[138703]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937112.2959337-1146-203979935508217/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:53 localhost podman[138759]: Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.635539988 +0000 UTC m=+0.075998008 container create baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, version=7, RELEASE=main) Feb 1 04:11:53 localhost systemd[1]: Started libpod-conmon-baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5.scope. Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.604587204 +0000 UTC m=+0.045045244 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:11:53 localhost systemd[1]: Started libcrun container. 
Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.72422636 +0000 UTC m=+0.164684380 container init baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True) Feb 1 04:11:53 localhost systemd[1]: tmp-crun.YrZCSm.mount: Deactivated successfully. Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.742948353 +0000 UTC m=+0.183406373 container start baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z) Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.743249693 +0000 UTC m=+0.183707703 container attach baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, release=1764794109, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git) Feb 1 04:11:53 localhost unruffled_babbage[138774]: 167 167 Feb 1 04:11:53 localhost systemd[1]: libpod-baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5.scope: Deactivated successfully. Feb 1 04:11:53 localhost podman[138759]: 2026-02-01 09:11:53.747147244 +0000 UTC m=+0.187605324 container died baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, release=1764794109, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z) Feb 1 04:11:53 localhost podman[138787]: 2026-02-01 09:11:53.838453778 +0000 UTC m=+0.082388958 container remove baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_babbage, ceph=True, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, 
build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109) Feb 1 04:11:53 localhost systemd[1]: libpod-conmon-baeb742de6e520c8b143cad4dadc1e6a9bb0cb0cc7401ac683cd4c78a5ae4fd5.scope: Deactivated successfully. Feb 1 04:11:54 localhost podman[138839]: Feb 1 04:11:54 localhost podman[138839]: 2026-02-01 09:11:54.016538264 +0000 UTC m=+0.064813520 container create 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, release=1764794109, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, ceph=True, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container) Feb 1 04:11:54 localhost systemd[1]: Started libpod-conmon-6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e.scope. Feb 1 04:11:54 localhost systemd[1]: Started libcrun container. 
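The podman entries above record a complete one-shot lifecycle for the unruffled_babbage container (image pull, create, init, start, attach, died and remove within the same second), and a second container, friendly_bohr, is started the same way. The log does not record the exact command line, so the sketch below is only a plausible equivalent using `podman run --rm`; the arguments passed to the container are an assumption.

    import subprocess

    IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"  # image named in the log

    def run_oneshot(args):
        # Start a disposable container, capture its stdout, and let podman
        # clean it up on exit -- mirroring the create/start/attach/died/
        # remove sequence recorded above. `args` is hypothetical; the log
        # does not show what was executed inside the container.
        result = subprocess.run(
            ["podman", "run", "--rm", IMAGE, *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout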
Feb 1 04:11:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aa27e443812a62a9bed344742514e4e00f14d0322874282f0c00e4424e2740/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 04:11:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aa27e443812a62a9bed344742514e4e00f14d0322874282f0c00e4424e2740/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:11:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aa27e443812a62a9bed344742514e4e00f14d0322874282f0c00e4424e2740/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:11:54 localhost podman[138839]: 2026-02-01 09:11:54.076628746 +0000 UTC m=+0.124904022 container init 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, vendor=Red Hat, Inc., release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux ) Feb 1 04:11:54 localhost podman[138839]: 2026-02-01 09:11:54.084298615 +0000 UTC m=+0.132573861 container start 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, release=1764794109, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Feb 1 04:11:54 localhost podman[138839]: 2026-02-01 09:11:54.084434449 +0000 
UTC m=+0.132709695 container attach 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, vcs-type=git) Feb 1 04:11:54 localhost podman[138839]: 2026-02-01 09:11:53.988246083 +0000 UTC m=+0.036521399 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:11:54 localhost python3.9[138900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:11:54 localhost systemd[1]: var-lib-containers-storage-overlay-5b76bd61099ee1a55f4022a929660daf3f366b904d62f5d9c0252ee609c694f2-merged.mount: Deactivated successfully. 
Feb 1 04:11:54 localhost friendly_bohr[138877]: [ Feb 1 04:11:54 localhost friendly_bohr[138877]: { Feb 1 04:11:54 localhost friendly_bohr[138877]: "available": false, Feb 1 04:11:54 localhost friendly_bohr[138877]: "ceph_device": false, Feb 1 04:11:54 localhost friendly_bohr[138877]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 1 04:11:54 localhost friendly_bohr[138877]: "lsm_data": {}, Feb 1 04:11:54 localhost friendly_bohr[138877]: "lvs": [], Feb 1 04:11:54 localhost friendly_bohr[138877]: "path": "/dev/sr0", Feb 1 04:11:54 localhost friendly_bohr[138877]: "rejected_reasons": [ Feb 1 04:11:54 localhost friendly_bohr[138877]: "Insufficient space (<5GB)", Feb 1 04:11:54 localhost friendly_bohr[138877]: "Has a FileSystem" Feb 1 04:11:54 localhost friendly_bohr[138877]: ], Feb 1 04:11:54 localhost friendly_bohr[138877]: "sys_api": { Feb 1 04:11:54 localhost friendly_bohr[138877]: "actuators": null, Feb 1 04:11:54 localhost friendly_bohr[138877]: "device_nodes": "sr0", Feb 1 04:11:54 localhost friendly_bohr[138877]: "human_readable_size": "482.00 KB", Feb 1 04:11:54 localhost friendly_bohr[138877]: "id_bus": "ata", Feb 1 04:11:54 localhost friendly_bohr[138877]: "model": "QEMU DVD-ROM", Feb 1 04:11:54 localhost friendly_bohr[138877]: "nr_requests": "2", Feb 1 04:11:54 localhost friendly_bohr[138877]: "partitions": {}, Feb 1 04:11:54 localhost friendly_bohr[138877]: "path": "/dev/sr0", Feb 1 04:11:54 localhost friendly_bohr[138877]: "removable": "1", Feb 1 04:11:54 localhost friendly_bohr[138877]: "rev": "2.5+", Feb 1 04:11:54 localhost friendly_bohr[138877]: "ro": "0", Feb 1 04:11:54 localhost friendly_bohr[138877]: "rotational": "1", Feb 1 04:11:54 localhost friendly_bohr[138877]: "sas_address": "", Feb 1 04:11:54 localhost friendly_bohr[138877]: "sas_device_handle": "", Feb 1 04:11:54 localhost friendly_bohr[138877]: "scheduler_mode": "mq-deadline", Feb 1 04:11:54 localhost friendly_bohr[138877]: "sectors": 0, Feb 1 04:11:54 localhost friendly_bohr[138877]: "sectorsize": "2048", Feb 1 04:11:54 localhost friendly_bohr[138877]: "size": 493568.0, Feb 1 04:11:54 localhost friendly_bohr[138877]: "support_discard": "0", Feb 1 04:11:54 localhost friendly_bohr[138877]: "type": "disk", Feb 1 04:11:54 localhost friendly_bohr[138877]: "vendor": "QEMU" Feb 1 04:11:54 localhost friendly_bohr[138877]: } Feb 1 04:11:54 localhost friendly_bohr[138877]: } Feb 1 04:11:54 localhost friendly_bohr[138877]: ] Feb 1 04:11:54 localhost python3.9[139391]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937113.7721438-1190-170711815420544/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:54 localhost systemd[1]: libpod-6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e.scope: Deactivated successfully. 
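The friendly_bohr output above is a JSON inventory of block devices; the only device reported, /dev/sr0, is marked unavailable ("Insufficient space (<5GB)", "Has a FileSystem"). A short Python sketch that filters such a report down to usable devices, using only the fields visible in the log:

    import json

    def usable_devices(report_text):
        # Parse the device inventory JSON (as printed above) and keep only
        # entries flagged available, returning path and human-readable size.
        devices = json.loads(report_text)
        return [
            (dev["path"], dev["sys_api"].get("human_readable_size"))
            for dev in devices
            if dev.get("available")
        ]

    sample = ('[{"available": false, "path": "/dev/sr0", '
              '"rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"], '
              '"sys_api": {"human_readable_size": "482.00 KB"}}]')
    print(usable_devices(sample))  # [] -- /dev/sr0 is rejected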
Feb 1 04:11:54 localhost podman[140475]: 2026-02-01 09:11:54.936849748 +0000 UTC m=+0.037392605 container died 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, build-date=2025-12-08T17:28:53Z, version=7, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:11:54 localhost systemd[1]: tmp-crun.sWxAkX.mount: Deactivated successfully. Feb 1 04:11:54 localhost systemd[1]: var-lib-containers-storage-overlay-81aa27e443812a62a9bed344742514e4e00f14d0322874282f0c00e4424e2740-merged.mount: Deactivated successfully. Feb 1 04:11:54 localhost podman[140475]: 2026-02-01 09:11:54.970143035 +0000 UTC m=+0.070685872 container remove 6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_bohr, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4) Feb 1 04:11:54 localhost systemd[1]: libpod-conmon-6548419d4fd22b49339a05eccf46b4b6a8e325728635f39642afba162af72c7e.scope: Deactivated successfully. 
Feb 1 04:11:55 localhost python3.9[140597]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:56 localhost python3.9[140689]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:11:57 localhost python3.9[140784]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:58 localhost sshd[140878]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:11:58 localhost python3.9[140877]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:58 localhost python3.9[140971]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:11:59 localhost python3.9[141063]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Feb 1 04:12:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60998 DF PROTO=TCP SPT=41384 DPT=9882 SEQ=3312263370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6526E6C0000000001030307) Feb 1 04:12:00 localhost python3.9[141156]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Feb 1 04:12:00 localhost systemd[1]: session-43.scope: Deactivated successfully. Feb 1 04:12:00 localhost systemd[1]: session-43.scope: Consumed 28.102s CPU time. 
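The ansible.posix.mount tasks near the end of the session above mount hugetlbfs instances at /dev/hugepages1G and /dev/hugepages2M with explicit page sizes (src=none, opts=pagesize=1G / pagesize=2M). A minimal Python sketch of the equivalent mount call; the fstab persistence the module also handles (boot=True) is omitted here.

    import subprocess

    def mount_hugetlbfs(path, pagesize):
        # Equivalent of the logged mount state: a hugetlbfs instance with a
        # fixed page size, e.g. /dev/hugepages1G with pagesize=1G.
        subprocess.run(
            ["mount", "-t", "hugetlbfs", "-o", f"pagesize={pagesize}", "none", path],
            check=True,
        )

    # e.g. mount_hugetlbfs("/dev/hugepages2M", "2M")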
Feb 1 04:12:00 localhost systemd-logind[761]: Session 43 logged out. Waiting for processes to exit. Feb 1 04:12:00 localhost systemd-logind[761]: Removed session 43. Feb 1 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25635 DF PROTO=TCP SPT=38470 DPT=9105 SEQ=944910504 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65274760000000001030307) Feb 1 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36908 DF PROTO=TCP SPT=51088 DPT=9882 SEQ=53325184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652750E0000000001030307) Feb 1 04:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6068 DF PROTO=TCP SPT=41644 DPT=9102 SEQ=2525634233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6527C010000000001030307) Feb 1 04:12:06 localhost sshd[141172]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:06 localhost systemd-logind[761]: New session 44 of user zuul. Feb 1 04:12:06 localhost systemd[1]: Started Session 44 of User zuul. Feb 1 04:12:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59541 DF PROTO=TCP SPT=43214 DPT=9100 SEQ=2497500456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652887C0000000001030307) Feb 1 04:12:07 localhost python3.9[141267]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None Feb 1 04:12:08 localhost python3.9[141359]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:12:09 localhost sshd[141409]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26511 DF PROTO=TCP SPT=42952 DPT=9101 SEQ=4126678113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652959F0000000001030307) Feb 1 04:12:10 localhost python3.9[141455]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Feb 1 04:12:11 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Feb 1 04:12:11 localhost python3.9[141549]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible._lgttopg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:12:12 localhost python3.9[141624]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible._lgttopg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937131.2500932-192-19814776055426/.source._lgttopg _original_basename=.fxs1vthu follow=False checksum=b6259656501c187ae53f530254d9fd01725b4ecf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30271 DF PROTO=TCP SPT=33402 DPT=9101 SEQ=2116717942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652A50D0000000001030307) Feb 1 04:12:14 localhost python3.9[141716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:12:16 localhost python3.9[141808]: ansible-ansible.builtin.blockinfile Invoked with block=np0005604212.localdomain,192.168.122.106,np0005604212* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/MKX//74FswFkw1c1lfM5mahSRoD4B8bhCZSm2/IQ//syuq+Qpi1sEoMv/N1mOrU8atXNtYkVNozl/ypDe2YJkUS8OTt37bT9A7XnBlfFSc5OwXS7VGHpVWbiMbImJibSV7HjoQP0yA8SvCJCcrI3Eh14+cna8tT1rJ9lOFRHvxLfG52XnzFiNUVDU+TG3uRtWEjY5epI8j/U73tEqdP4OAk7ZQ9riN1nllCCIs9FOErOEw14VW+151TbOCzcm9kvzeQMit9jPXTGqmTPKoidZFLhJwEAXq4M9+DFfKQWkVSqfcU3cvPz6S03lUcpPWiJxgGZiIPXxCdRjvI3bKCm898lFYwZq8EfdAwUFMyhmz4GHSyhMwqZWE46cikXf/skoSrEF8ji3NjmyQL7T304iKenZca6rHDI56veO0+PTzZj/pBiaWBWXlqF0WQLAn804z3yapsLNuR8R4EaREmk1Tc2ESg1//73pCUypwEMQWESHsAJ/LCHhyqNHY6Bjc=#012np0005604212.localdomain,192.168.122.106,np0005604212* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE8ydwus/1P6AnrixkRz4PJNoZXio9ATjx1wpGE9aUxy#012np0005604212.localdomain,192.168.122.106,np0005604212* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHuZj1kjh43u8MkLoV7TID8opUYqcB9nbV+TEcV1Khgm9NhSBcQeUlB5GJecVMFtUp1FQn9l3Oxy0aNJL0spiWE=#012np0005604210.localdomain,192.168.122.104,np0005604210* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeVlqpmEgZX6yoZkE7SzVbEM6MqJe/9qDZPPgFZPb/N85k+uB3cINsoq0pMJYeKjcKY8H56WyuNkVVwVHaouZnJCN4p1rCJmATIDieU8QMDwGucQpbrNRrQWheWQDkmHNIPOxnUDCRgEzDfYiaE4prLHMPKtf8XJAKUKVd6lpZrVSCovGz0UC3U1Le/0N1PJOi4kYEuipVrcfoYHC63A32I+w+7tybU8Rpknhc/UHhdn39PBGuAhbkSf2JEJbLLzLaPkZXT6HOPiBUT9jWKnymCGEcfPjIWOkeelx3fkPoXZCtnYHlSoQSkCVsUmXgHNj7X3+6sJi9+iV/+8jRWQyk6aCC+HjXDhSwxbBUaM9AOimJ9EK7vo8/IK9pQ3gNsEct6rHuvGytACNMWpaT5sRRaVEnS8uz/PL8urB6+59GYGunjAaw8lCQcxw+VNVJaLtj+BpVJZA2EA6XE4fwq7v0s9u0ApIMSyV3DcYzIcDFlT11I5g3RM8vZNipXfnub3U=#012np0005604210.localdomain,192.168.122.104,np0005604210* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOlN+6Wna5zexGzaC7+fSuZYqptFJJzfc4fNurRaPmwC#012np0005604210.localdomain,192.168.122.104,np0005604210* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMf1D0EfcBESlFDd0NV4yvsDLeyI7zSTGShGHjV17TDeMwOZQ9X97P3K+p+QICvUvg8AXGXxFhArHCUmm+iJ0Q8=#012np0005604209.localdomain,192.168.122.103,np0005604209* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAdXF2/8XBq3bWgr/9swIkzjlkm7PzpC1vdYXglaExGeIUwK5n05/HLobUMrYjOh6yE81+tctBT51wuPLw9qOGf4X3lRx3x0AHUqWSs00OL5nZsMRAd6PknZVyeCWf9jv13mVWIExCYbP8e4VK4M3w1m2xSLFd1aHtGkEUYJKCmacxrxFu2opq+kNCclpMC0BlFeSeX/NZeGwcfVCEyP46JVB9pNDo6D4s98FzzQNtG4DTv8NqE0S8Fj44dajq/80IKXeVEbhVmBikwFGMMEHhsRass2m0Q0rBw1Cv2jqW9hrTO1AWHY2aNDDqr6cKttP27XKfc/unDFFDb0mcc/HRa8JAUYEvuO0FIV6n28+Q5hWoYHAZfMU15U/bQPN1UxbF/MmSIZWvwY+vzCJ+icSJ9qfhDfbd1DttRuV0F3Jdi0jq01TyyPdOz8qT7kKSftD3Awn6BNLlseR8MaOTS+YF4fOnSP/xzj0B+nx/nr5Mrq8+QzKb2YyqdMfWWMGdCw8=#012np0005604209.localdomain,192.168.122.103,np0005604209* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEyfGu/WqJIvC6oouYQjgcrJPk9Bg07JDIkt1JPKTeA0#012np0005604209.localdomain,192.168.122.103,np0005604209* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH4jrW0M0jOqWvBkwMTs5aJ7MoUwB68xLOHVc4M2y1jfTW9cs2+E3JaFwH6xJLpPXRNwbxblwTFdTeLzxwq3Nwk=#012np0005604211.localdomain,192.168.122.105,np0005604211* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ5JUOdiESLpaYomijw3u9LxHN4VxpmenW9EczyVvVdofuEESAIR1Q8BIVkW7gxgVyrzHxOpbaoAS+aZaKazruu7/chC8MkDw1lvfeyQwMZax6UziUan2wIFVTaCc7kITOHrdWkJm+OIvCs/ImtkSgsTmvTiQedvs86ME3gHNyA+7taoDXnH6UCB6d5ex6PzwXsKI03iUVWFfsGP3ZU7r52IBwgrLG+VplbaPBRNNP/RvKULVsokG3UCMd3pjHv3VYBdXPYTFOPf666ZEuxEz+Frz43oXzEhr4W61RN70cAFJDDFoOmBDxXzZqrmF7r1vSV3ojl+aHaVLCGL4Wnjrp9wl5Zq8XCGN/7ttzaZKrjj/flccfBEiYL9odgqp92EjmxsRqG4bFq/nEzS/DTJ88QQVpGQNC2T6bElJVdBIrpZAyv7n5HlwNQwfsltQtzbqe1E32azZb1wq13ajV9Ii7QrVd81nGYFM79NqiVVbXs5NypsJOMQ6ZoqyHK5+yyHk=#012np0005604211.localdomain,192.168.122.105,np0005604211* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILEIcduNL0DMEDOErXXJ0uk29DlGSUk7f/QOEFebs4e8#012np0005604211.localdomain,192.168.122.105,np0005604211* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCGGOVXInAZnlCFh3rgVH6nrUWtitrkOeovDtC1WeeR/gHrJ+susCZPN3v3pAe5flAEf/hpjySdS/u1PmS0N8Ho=#012np0005604215.localdomain,192.168.122.108,np0005604215* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/jKlZ/vxfazmNjpekfENGpQi8TTD6ErYy0BH9P8CRIiiKVdA/53XGSAQlY17b4tT5hzyHsUuXDmbv5R98FSy/Fi8F4KrjgogVPhd/zYoMrffr9ydwv+ih2mIyCPjZC+N92i92gM2OBHBXj5vqyh5yl1t4H1LhFab7P/m42K75mcTytGvGTLKXZbcs/1Ot/APGrs5wqg/c9XFQtgBEn6ttSKQ9caqbgUw88VGRkzaHvzheQvtIjZL0AwigTS24tqFx+bF+liSnSaYk1R8TKe1yMNODv5OCUmFYvPqls4Y3AQkpuroQQXHcQCe0QPuz9nGgPebNOxyTHsK66oDWIUskoYIbrZZhjDxlpdzJ+POEU/jXtGox0/0wlpRK7jNN6r4Fzx6uIzxB5SWn/UJ4BYS853pUsC32TeD0pZXfUAzOGUOzQfvYkUCElyRi8zDN4ubwEWnxvCEPaAFihafbviqQwLNFFmth36owDHV2zU/Q/BtW8vrwfx0cPr2A4WvQvp8=#012np0005604215.localdomain,192.168.122.108,np0005604215* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAqxjQs+R8e7wYqi9vXJigqVZC1H7cyvu0Lob0wgHHpY#012np0005604215.localdomain,192.168.122.108,np0005604215* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpZtz+gA3A28TfIAE9+rHy4sghRBF4nh1U9zBwiez8FWMv0OjVQriiYnYh6sbsEW0tK+yZBRm7xEpd3W14ioec=#012np0005604213.localdomain,192.168.122.107,np0005604213* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhh44DuXnO4hBZJvT1vLnO8ZhT8GKLkBI0M+Q/lXSbHymnCyNerLMqVRhTb5ZUw07lkP6FtBJS95SUtdJuAbUi4jphShtJfBdicoa+uGqI1icHUQCbtCAACtas0lGeGi5q/q1LfzeuKh+LTRj60W+r2OZoChKxeSWYBQ8gIScKe1HgVCJVEESXwNv4CBs6ffOWVYHE+3JDUA3AN3nX931xw4oLMBkwi0q4sNh9Sb0oS79OX+dKdlGfnPLLWKF9QrLrHYdHVkKtPre9d1BdNkl38gRE45uwrAAxXBfeZjbzzfbUlWb54SZwL8P2ej29L5VAbE/97j1HD6+kUZ5wFb6v9oJyFwq8udFDqO1SUMkW4t1VmwD5G4rIU2+u0yHd4H7//fgbf8WAhPv1Qx5tXEqB6LIHqYCz7RekNQO5Xv8ge/gVMzzlxB0DJP6a4DJ8E0/Djnyzw81L2fmyeriPLqt/n/wHscNr1RRI4T1X2iINRwk5QfrxwTEHhJ00FY1kB90=#012np0005604213.localdomain,192.168.122.107,np0005604213* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIHpQ8q5SipY+Tg88mzREiMhmtuvQNv/rHiJfQhVqjy49#012np0005604213.localdomain,192.168.122.107,np0005604213* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM6lbWtwCks630IMm3N6slgTXAS2/BDd/gLT/86gsZQSUwulBMm6OKfJ9eje+B7RGiNR4je3u2+SDaZwwywpAos=#012 create=True mode=0644 path=/tmp/ansible._lgttopg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:17 localhost python3.9[141900]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._lgttopg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:12:18 localhost python3.9[141994]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._lgttopg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:19 localhost systemd[1]: session-44.scope: Deactivated successfully. Feb 1 04:12:19 localhost systemd[1]: session-44.scope: Consumed 4.276s CPU time. Feb 1 04:12:19 localhost systemd-logind[761]: Session 44 logged out. Waiting for processes to exit. Feb 1 04:12:19 localhost systemd-logind[761]: Removed session 44. Feb 1 04:12:25 localhost sshd[142009]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:25 localhost systemd-logind[761]: New session 45 of user zuul. Feb 1 04:12:25 localhost systemd[1]: Started Session 45 of User zuul. 
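The blockinfile task above assembles /etc/ssh/ssh_known_hosts entries of the form "host,address,pattern* key-type base64-key"; the #012 sequences in the logged block are the syslog escaping of embedded newlines. A tiny Python sketch that splits one such entry into its parts, using a line copied from the log:

    def parse_known_hosts_entry(line):
        # Split one ssh_known_hosts line into its comma-separated host
        # patterns, the key type, and the base64-encoded key blob.
        hosts, key_type, key = line.split(None, 2)
        return hosts.split(","), key_type, key.strip()

    sample = ("np0005604212.localdomain,192.168.122.106,np0005604212* ssh-ed25519 "
              "AAAAC3NzaC1lZDI1NTE5AAAAIE8ydwus/1P6AnrixkRz4PJNoZXio9ATjx1wpGE9aUxy")
    print(parse_known_hosts_entry(sample))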
Feb 1 04:12:26 localhost python3.9[142102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:12:27 localhost python3.9[142198]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 04:12:28 localhost python3.9[142292]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:12:29 localhost python3.9[142385]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:12:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16520 DF PROTO=TCP SPT=60794 DPT=9882 SEQ=2947710787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652E39D0000000001030307) Feb 1 04:12:30 localhost python3.9[142478]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:12:31 localhost python3.9[142572]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2590 DF PROTO=TCP SPT=58390 DPT=9105 SEQ=166841420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652E9A80000000001030307) Feb 1 04:12:31 localhost python3.9[142667]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:32 localhost systemd[1]: session-45.scope: Deactivated successfully. Feb 1 04:12:32 localhost systemd[1]: session-45.scope: Consumed 3.870s CPU time. Feb 1 04:12:32 localhost systemd-logind[761]: Session 45 logged out. Waiting for processes to exit. Feb 1 04:12:32 localhost systemd-logind[761]: Removed session 45. 
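Note: session 45 applies the firewall state. sshd is enabled and started, the EDPM chains are loaded from /etc/nftables/edpm-chains.nft, and because the marker file /etc/nftables/edpm-rules.nft.changed exists, the flush, rules and update-jumps files are concatenated into a single nft run before the marker is removed. Roughly equivalent shell, with the commands taken from the log (the conditional wrapper is an assumption about why the reload ran):

systemctl enable sshd
systemctl start sshd
nft -f /etc/nftables/edpm-chains.nft                     # (re)create the edpm-* chains
if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # flush and reload the rules in one transaction
    rm -f /etc/nftables/edpm-rules.nft.changed           # clear the marker once applied
fi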
Feb 1 04:12:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2591 DF PROTO=TCP SPT=58390 DPT=9105 SEQ=166841420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652EDCD0000000001030307) Feb 1 04:12:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22570 DF PROTO=TCP SPT=45074 DPT=9102 SEQ=1475244689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652F1310000000001030307) Feb 1 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22571 DF PROTO=TCP SPT=45074 DPT=9102 SEQ=1475244689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652F54E0000000001030307) Feb 1 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2592 DF PROTO=TCP SPT=58390 DPT=9105 SEQ=166841420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652F5CD0000000001030307) Feb 1 04:12:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22572 DF PROTO=TCP SPT=45074 DPT=9102 SEQ=1475244689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652FD4D0000000001030307) Feb 1 04:12:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20465 DF PROTO=TCP SPT=59290 DPT=9100 SEQ=1529206010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA652FDAC0000000001030307) Feb 1 04:12:37 localhost sshd[142682]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:37 localhost systemd-logind[761]: New session 46 of user zuul. Feb 1 04:12:37 localhost systemd[1]: Started Session 46 of User zuul. 
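Note: the kernel "DROPPING:" lines interleaved through this part of the log are firewall log entries: inbound SYNs arriving on br-ex from 192.168.122.10 toward TCP ports 9100-9102, 9105 and 9882 are being logged (and, judging by the prefix, discarded) while the EDPM ruleset is being rebuilt. As a hedged illustration only, a log-and-drop rule of roughly this shape would emit such messages; the actual rules are generated into /etc/nftables/edpm-rules.nft and are not reproduced in this log:

nft -c -f - <<'EOF'
# -c: syntax-check only, nothing is installed; "example" is an illustrative table name
table inet example {
    chain input {
        type filter hook input priority filter; policy accept;
        # log unmatched new connections with a prefix, then drop them
        ct state new log prefix "DROPPING: " drop
    }
}
EOF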
Feb 1 04:12:38 localhost python3.9[142775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:12:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20467 DF PROTO=TCP SPT=59290 DPT=9100 SEQ=1529206010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65309CD0000000001030307) Feb 1 04:12:40 localhost python3.9[142871]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:12:41 localhost python3.9[142925]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 1 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8392 DF PROTO=TCP SPT=53800 DPT=9101 SEQ=2319487304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65316CD0000000001030307) Feb 1 04:12:45 localhost python3.9[143017]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2594 DF PROTO=TCP SPT=58390 DPT=9105 SEQ=166841420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653250D0000000001030307) Feb 1 04:12:46 localhost python3.9[143110]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:47 localhost sshd[143203]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:47 localhost python3.9[143202]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:48 localhost python3.9[143296]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 
* systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:12:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22574 DF PROTO=TCP SPT=45074 DPT=9102 SEQ=1475244689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6532D0E0000000001030307) Feb 1 04:12:49 localhost python3.9[143386]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:12:49 localhost python3.9[143476]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:12:50 localhost python3.9[143568]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:12:51 localhost systemd-logind[761]: Session 46 logged out. Waiting for processes to exit. Feb 1 04:12:51 localhost systemd[1]: session-46.scope: Deactivated successfully. Feb 1 04:12:51 localhost systemd[1]: session-46.scope: Consumed 8.836s CPU time. Feb 1 04:12:51 localhost systemd-logind[761]: Removed session 46. Feb 1 04:12:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20469 DF PROTO=TCP SPT=59290 DPT=9100 SEQ=1529206010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653390D0000000001030307) Feb 1 04:12:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8394 DF PROTO=TCP SPT=53800 DPT=9101 SEQ=2319487304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653470D0000000001030307) Feb 1 04:12:55 localhost sshd[143585]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:12:55 localhost systemd-logind[761]: New session 47 of user zuul. Feb 1 04:12:55 localhost systemd[1]: Started Session 47 of User zuul. 
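Note: this block records the reboot-required bookkeeping: yum-utils is installed, needs-restarting -r is run, and because it reported that core libraries/services were updated since boot, a marker file under /var/lib/openstack/reboot_required/ is created and populated with the command's output; a later find task only checks whether any such file exists. A compact shell sketch under those assumptions (the real run uses the separate dnf/file/lineinfile/find tasks logged above):

dnf -y install yum-utils                                  # provides needs-restarting
mkdir -p -m 0755 /var/lib/openstack/reboot_required
out=$(needs-restarting -r) || {
    # non-zero exit: core libraries or services were updated since boot-up
    touch /var/lib/openstack/reboot_required/needs_restarting
    chmod 0600 /var/lib/openstack/reboot_required/needs_restarting
    printf '%s\n' "$out" >> /var/lib/openstack/reboot_required/needs_restarting
}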
Feb 1 04:12:56 localhost python3.9[143740]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:12:59 localhost python3.9[143851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46837 DF PROTO=TCP SPT=44422 DPT=9882 SEQ=2102494178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65358CC0000000001030307) Feb 1 04:13:00 localhost python3.9[143943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:00 localhost python3.9[144016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937179.3799179-177-225984705522519/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46838 DF PROTO=TCP SPT=44422 DPT=9882 SEQ=2102494178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6535CCD0000000001030307) Feb 1 04:13:01 localhost python3.9[144108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:02 localhost python3.9[144200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:02 localhost python3.9[144273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937181.6586397-252-203254520046019/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:03 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46839 DF PROTO=TCP SPT=44422 DPT=9882 SEQ=2102494178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65364CD0000000001030307) Feb 1 04:13:03 localhost python3.9[144365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:03 localhost python3.9[144457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:04 localhost python3.9[144530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937183.529224-328-163416117337822/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:05 localhost python3.9[144622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:05 localhost python3.9[144714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:06 localhost python3.9[144787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937185.384328-394-190542600162643/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54095 DF PROTO=TCP SPT=44992 DPT=9102 SEQ=122241056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653724D0000000001030307) Feb 1 04:13:07 localhost python3.9[144879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:07 localhost python3.9[144971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:08 localhost python3.9[145044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937187.3565273-463-222454447581263/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:09 localhost python3.9[145136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:09 localhost python3.9[145228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60149 DF PROTO=TCP SPT=36410 DPT=9100 SEQ=3909379442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6537ECD0000000001030307) Feb 1 04:13:10 localhost python3.9[145301]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937189.2451031-538-211459311670180/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:10 localhost python3.9[145393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:11 localhost python3.9[145485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:12 localhost python3.9[145558]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937191.12612-613-184721366672374/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:13 localhost python3.9[145650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31986 DF PROTO=TCP SPT=35728 DPT=9101 SEQ=1042277212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6538C0E0000000001030307) Feb 1 04:13:13 localhost chronyd[134242]: Selected source 216.232.132.95 (pool.ntp.org) Feb 1 04:13:13 localhost python3.9[145742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:14 localhost python3.9[145815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937193.2733457-687-95803049889184/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3730c7272422aae3617cbe6ef3938e59e92fe8bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:14 localhost systemd-logind[761]: Session 47 logged out. Waiting for processes to exit. Feb 1 04:13:14 localhost systemd[1]: session-47.scope: Deactivated successfully. Feb 1 04:13:14 localhost systemd[1]: session-47.scope: Consumed 11.520s CPU time. Feb 1 04:13:14 localhost systemd-logind[761]: Removed session 47. Feb 1 04:13:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46841 DF PROTO=TCP SPT=44422 DPT=9882 SEQ=2102494178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653950D0000000001030307) Feb 1 04:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54097 DF PROTO=TCP SPT=44992 DPT=9102 SEQ=122241056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653A30E0000000001030307) Feb 1 04:13:20 localhost sshd[145830]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:13:20 localhost systemd-logind[761]: New session 48 of user zuul. Feb 1 04:13:20 localhost systemd[1]: Started Session 48 of User zuul. 
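Note: session 47's CA-trust tasks install the same tls-ca-bundle.pem (sha1 3730c727…) into one directory per service under /var/lib/openstack/cacerts/, each directory created 0755 root:root with setype=container_file_t, presumably so every container gets its own bind-mountable copy. A shell sketch of the loop (SRC_BUNDLE stands for the bundle staged by Ansible under ~zuul/.ansible/tmp; the bind-mount interpretation is an assumption):

SRC_BUNDLE=tls-ca-bundle.pem
for svc in telemetry neutron-sriov neutron-dhcp nova libvirt ovn bootstrap neutron-metadata; do
    install -d -m 0755 -o root -g root "/var/lib/openstack/cacerts/${svc}"
    chcon -t container_file_t "/var/lib/openstack/cacerts/${svc}"   # setype=container_file_t, as logged
    install -m 0644 -o root -g root "$SRC_BUNDLE" "/var/lib/openstack/cacerts/${svc}/tls-ca-bundle.pem"
done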
Feb 1 04:13:21 localhost python3.9[145925]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60151 DF PROTO=TCP SPT=36410 DPT=9100 SEQ=3909379442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653AF0E0000000001030307) Feb 1 04:13:22 localhost python3.9[146017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:23 localhost python3.9[146090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937201.867174-60-158141258812831/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=814f759dcc97f4b50c85badaa6f3819c2533c70a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:23 localhost python3.9[146182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:24 localhost python3.9[146255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937203.310061-60-124996377668773/.source.conf _original_basename=ceph.conf follow=False checksum=6c8f40813464a566eca7252d9e693fc8375e148c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:24 localhost systemd[1]: session-48.scope: Deactivated successfully. Feb 1 04:13:24 localhost systemd[1]: session-48.scope: Consumed 2.274s CPU time. Feb 1 04:13:24 localhost systemd-logind[761]: Session 48 logged out. Waiting for processes to exit. Feb 1 04:13:24 localhost systemd-logind[761]: Removed session 48. 
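Note: session 48 drops the Ceph client configuration into place: /var/lib/openstack/config/ceph is created 0755, the openstack client keyring is installed with mode 0600 and ceph.conf with mode 0644. The equivalent shell, as a sketch (the source files are staged by Ansible and not shown in the log):

install -d -m 0755 /var/lib/openstack/config/ceph
install -m 0600 ceph.client.openstack.keyring /var/lib/openstack/config/ceph/
install -m 0644 ceph.conf /var/lib/openstack/config/ceph/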
Feb 1 04:13:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31988 DF PROTO=TCP SPT=35728 DPT=9101 SEQ=1042277212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653BD0D0000000001030307) Feb 1 04:13:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59474 DF PROTO=TCP SPT=54120 DPT=9882 SEQ=2905514156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653CDFD0000000001030307) Feb 1 04:13:30 localhost sshd[146270]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:13:30 localhost systemd-logind[761]: New session 49 of user zuul. Feb 1 04:13:30 localhost systemd[1]: Started Session 49 of User zuul. Feb 1 04:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59475 DF PROTO=TCP SPT=54120 DPT=9882 SEQ=2905514156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653D20D0000000001030307) Feb 1 04:13:31 localhost python3.9[146363]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:13:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59476 DF PROTO=TCP SPT=54120 DPT=9882 SEQ=2905514156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653DA0E0000000001030307) Feb 1 04:13:33 localhost sshd[146460]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:13:33 localhost python3.9[146459]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:34 localhost python3.9[146553]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:13:35 localhost python3.9[146643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:13:36 localhost python3.9[146735]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 1 04:13:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25548 DF PROTO=TCP SPT=37740 DPT=9102 SEQ=3613580980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653E78E0000000001030307) Feb 1 04:13:37 localhost python3.9[146827]: ansible-ansible.legacy.setup Invoked with 
filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:13:38 localhost python3.9[146881]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:13:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43228 DF PROTO=TCP SPT=36978 DPT=9100 SEQ=3856211447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653F40D0000000001030307) Feb 1 04:13:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31989 DF PROTO=TCP SPT=35728 DPT=9101 SEQ=1042277212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA653FD0D0000000001030307) Feb 1 04:13:42 localhost python3.9[146975]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:13:44 localhost python3[147070]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Feb 1 04:13:45 localhost python3.9[147162]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59478 DF PROTO=TCP SPT=54120 DPT=9882 SEQ=2905514156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6540B0D0000000001030307) Feb 1 04:13:46 localhost python3.9[147254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:46 localhost python3.9[147302]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml 
_original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:47 localhost python3.9[147394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:48 localhost python3.9[147442]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._7bmvx44 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:48 localhost python3.9[147534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25550 DF PROTO=TCP SPT=37740 DPT=9102 SEQ=3613580980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654170D0000000001030307) Feb 1 04:13:49 localhost python3.9[147582]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:50 localhost python3.9[147674]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:13:50 localhost python3[147767]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 1 04:13:51 localhost python3.9[147859]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43230 DF PROTO=TCP SPT=36978 DPT=9100 SEQ=3856211447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654250E0000000001030307) Feb 1 04:13:52 localhost python3.9[147934]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937231.133279-428-82978230770284/.source.nft follow=False 
_original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:53 localhost python3.9[148026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:53 localhost python3.9[148101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937232.540732-473-13596121964821/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:54 localhost python3.9[148193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:54 localhost python3.9[148268]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937233.7307591-519-224998040110946/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21739 DF PROTO=TCP SPT=55952 DPT=9101 SEQ=3486246171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654310D0000000001030307) Feb 1 04:13:55 localhost python3.9[148360]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:56 localhost python3.9[148435]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937235.021637-563-167786611964579/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:56 localhost python3.9[148527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:13:57 localhost python3.9[148602]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937236.3777828-608-72438423011113/.source.nft follow=False 
_original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:58 localhost python3.9[148724]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:13:58 localhost python3.9[148870]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:13:59 localhost python3.9[148994]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56978 DF PROTO=TCP SPT=41636 DPT=9882 SEQ=2464527128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654432D0000000001030307) Feb 1 04:14:00 localhost python3.9[149101]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56979 DF PROTO=TCP SPT=41636 DPT=9882 SEQ=2464527128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654474D0000000001030307) Feb 1 04:14:01 localhost python3.9[149194]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:14:02 localhost python3.9[149288]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:02 localhost python3.9[149383]: ansible-ansible.builtin.file Invoked 
with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56980 DF PROTO=TCP SPT=41636 DPT=9882 SEQ=2464527128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6544F4D0000000001030307) Feb 1 04:14:03 localhost python3.9[149473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:14:05 localhost python3.9[149566]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005604215.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:99:13:90:9c" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:05 localhost ovs-vsctl[149567]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . 
external_ids:hostname=np0005604215.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:99:13:90:9c external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Feb 1 04:14:05 localhost python3.9[149659]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32301 DF PROTO=TCP SPT=58952 DPT=9102 SEQ=3476738004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6545CCD0000000001030307) Feb 1 04:14:06 localhost python3.9[149752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:14:07 localhost python3.9[149846]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:08 localhost python3.9[149938]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:08 localhost python3.9[149986]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:09 localhost python3.9[150078]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13061 DF PROTO=TCP SPT=39794 DPT=9100 SEQ=145092015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654694E0000000001030307) Feb 1 04:14:09 localhost python3.9[150126]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container 
_original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:10 localhost python3.9[150218]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:11 localhost python3.9[150310]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:11 localhost python3.9[150358]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:12 localhost python3.9[150450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47171 DF PROTO=TCP SPT=34220 DPT=9101 SEQ=2025353292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654764E0000000001030307) Feb 1 04:14:13 localhost python3.9[150498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:14 localhost python3.9[150590]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:14:14 localhost systemd[1]: Reloading. Feb 1 04:14:14 localhost systemd-sysv-generator[150619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:14:14 localhost systemd-rc-local-generator[150616]: /etc/rc.d/rc.local is not marked executable, skipping. 
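Note: the tasks above install the edpm-container-shutdown helper (scripts under /var/local/libexec, a unit file and a 91-edpm-container-shutdown.preset), then enable and start it through the systemd module with daemon_reload=True, which is what triggers the "Reloading." pass and the SysV/rc.local generator notices around it. A sketch of the activation step (the unit and preset contents are shipped by the role and not reproduced here; a preset of the usual form would simply contain "enable edpm-container-shutdown.service"):

systemctl daemon-reload
systemctl enable --now edpm-container-shutdown.service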
Feb 1 04:14:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:14:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56982 DF PROTO=TCP SPT=41636 DPT=9882 SEQ=2464527128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6547F0D0000000001030307) Feb 1 04:14:16 localhost python3.9[150719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:16 localhost python3.9[150767]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:17 localhost python3.9[150859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:17 localhost python3.9[150907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:18 localhost python3.9[150999]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:14:18 localhost systemd[1]: Reloading. Feb 1 04:14:18 localhost systemd-sysv-generator[151027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:14:18 localhost systemd-rc-local-generator[151024]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:14:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:14:18 localhost systemd[1]: Starting Create netns directory... Feb 1 04:14:18 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 04:14:18 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 04:14:18 localhost systemd[1]: Finished Create netns directory. 
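Note: the same pattern is repeated for the netns-placeholder oneshot, whose job, per its unit description, is to create the netns directory; the log shows it start, run and deactivate successfully. A short verification sketch (the ls path is an assumption based on the run-netns-placeholder.mount unit name; /run/netns is the conventional location):

systemctl daemon-reload
systemctl enable --now netns-placeholder.service
systemctl is-active netns-placeholder.service   # a oneshot without RemainAfterExit typically reports "inactive" once finished
ls /run/netns                                   # assumption: the placeholder mount lives under /run/netns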
Feb 1 04:14:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32303 DF PROTO=TCP SPT=58952 DPT=9102 SEQ=3476738004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6548D0D0000000001030307) Feb 1 04:14:20 localhost python3.9[151135]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:21 localhost python3.9[151227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13063 DF PROTO=TCP SPT=39794 DPT=9100 SEQ=145092015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654990D0000000001030307) Feb 1 04:14:22 localhost python3.9[151300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937260.989658-1340-256392541365729/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:23 localhost python3.9[151392]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:23 localhost python3.9[151484]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:24 localhost python3.9[151576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:24 localhost python3.9[151651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937263.908645-1439-32607004191211/.source.json _original_basename=.bmxxkd6m follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47173 DF PROTO=TCP SPT=34220 DPT=9101 SEQ=2025353292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654A70D0000000001030307) Feb 1 04:14:25 localhost python3.9[151741]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:28 localhost python3.9[151994]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Feb 1 04:14:29 localhost python3.9[152086]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:14:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16169 DF PROTO=TCP SPT=48340 DPT=9882 SEQ=4134950418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654B85D0000000001030307) Feb 1 04:14:30 localhost python3[152178]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:14:30 localhost python3[152178]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e",#012 "Digest": "sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:38:56.623500445Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 346422728,#012 "VirtualSize": 346422728,#012 
"GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:033e0289d512b27a678c3feb7195acb9c5f2fbb27c9b2d8c8b5b5f6156f0d11f",#012 "sha256:f848a534c5dfe59c31c3da34c3d2466bdea7e8da7def4225acdd3ffef1544d2f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf 
install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:55.650316471Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util- Feb 1 04:14:30 localhost podman[152228]: 2026-02-01 09:14:30.814552964 +0000 UTC m=+0.092954164 container remove e9aad77783b40c342d984b2f22c7e9e198801b7dddada155e338bf18c344e257 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container) Feb 1 04:14:30 localhost python3[152178]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Feb 1 04:14:30 localhost podman[152241]: Feb 1 04:14:30 localhost podman[152241]: 2026-02-01 09:14:30.924189184 +0000 UTC m=+0.088020651 container create c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Feb 1 04:14:30 localhost podman[152241]: 2026-02-01 09:14:30.88110272 +0000 UTC m=+0.044934227 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 1 04:14:30 localhost python3[152178]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 1 04:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=16170 DF PROTO=TCP SPT=48340 DPT=9882 SEQ=4134950418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654BC4E0000000001030307) Feb 1 04:14:32 localhost python3.9[152370]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:14:32 localhost python3.9[152464]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16171 DF PROTO=TCP SPT=48340 DPT=9882 SEQ=4134950418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654C44D0000000001030307) Feb 1 04:14:33 localhost python3.9[152510]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:14:33 localhost python3.9[152601]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769937273.348835-1673-17743743854873/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:34 localhost python3.9[152647]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:14:34 localhost systemd[1]: Reloading. Feb 1 04:14:34 localhost systemd-sysv-generator[152675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:14:34 localhost systemd-rc-local-generator[152671]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:14:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:14:35 localhost python3.9[152729]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:14:35 localhost systemd[1]: Reloading. Feb 1 04:14:35 localhost systemd-rc-local-generator[152757]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:14:35 localhost systemd-sysv-generator[152760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
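Throughout this section, each ansible.legacy.stat / ansible.legacy.copy pair compares SHA-1 checksums (checksum_algorithm=sha1) before overwriting targets such as /etc/systemd/system/edpm_ovn_controller.service. A small sketch of that compare-then-copy idempotency check, assuming plain local files; the function names are illustrative.

```python
import hashlib
import shutil

def sha1_of(path: str) -> str:
    """Stream a file through SHA-1, as the stat module does with
    get_checksum=True and checksum_algorithm=sha1."""
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_if_changed(src: str, dest: str) -> bool:
    """Copy src over dest only when the checksums differ; True means copied."""
    src_sum = sha1_of(src)
    try:
        if sha1_of(dest) == src_sum:
            return False
    except FileNotFoundError:
        pass  # destination does not exist yet
    shutil.copy2(src, dest)
    return True
```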
Feb 1 04:14:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:14:35 localhost systemd[1]: Starting dnf makecache... Feb 1 04:14:35 localhost systemd[1]: Starting ovn_controller container... Feb 1 04:14:36 localhost systemd[1]: tmp-crun.4xu3db.mount: Deactivated successfully. Feb 1 04:14:36 localhost systemd[1]: Started libcrun container. Feb 1 04:14:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883fdd7a94716d25b4111e0521450c77982b7d94557f6b979a4ec8b45324f27/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 1 04:14:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:14:36 localhost podman[152772]: 2026-02-01 09:14:36.063196252 +0000 UTC m=+0.180847039 container init c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127) Feb 1 04:14:36 localhost ovn_controller[152787]: + sudo -E kolla_set_configs Feb 1 04:14:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
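The "Started /usr/bin/podman healthcheck run <id>" entries are transient systemd units that execute the container's configured test (here the mounted /openstack/healthcheck script). A hedged sketch of invoking the same check by hand and reading its exit status; as the status=1/FAILURE entries further down show, a non-zero exit is reported as unhealthy.

```python
import subprocess

def run_healthcheck(container: str) -> bool:
    """Invoke podman's healthcheck for a container; exit status 0 means the
    check passed, non-zero means it failed (as happens below while
    ovn-controller is still starting up)."""
    result = subprocess.run(["podman", "healthcheck", "run", container])
    return result.returncode == 0

if not run_healthcheck("ovn_controller"):
    print("ovn_controller healthcheck reported unhealthy")
```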
Feb 1 04:14:36 localhost podman[152772]: 2026-02-01 09:14:36.095839235 +0000 UTC m=+0.213490042 container start c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:14:36 localhost edpm-start-podman-container[152772]: ovn_controller Feb 1 04:14:36 localhost dnf[152769]: Updating Subscription Management repositories. Feb 1 04:14:36 localhost systemd[1]: Created slice User Slice of UID 0. Feb 1 04:14:36 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 1 04:14:36 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 1 04:14:36 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 1 04:14:36 localhost podman[152794]: 2026-02-01 09:14:36.20946927 +0000 UTC m=+0.103803291 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller) Feb 1 04:14:36 localhost systemd[152818]: Queued start job for default target Main User Target. Feb 1 04:14:36 localhost systemd[152818]: Created slice User Application Slice. Feb 1 04:14:36 localhost systemd[152818]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 1 04:14:36 localhost systemd[152818]: Started Daily Cleanup of User's Temporary Directories. Feb 1 04:14:36 localhost systemd[152818]: Reached target Paths. Feb 1 04:14:36 localhost systemd[152818]: Reached target Timers. Feb 1 04:14:36 localhost systemd[152818]: Starting D-Bus User Message Bus Socket... Feb 1 04:14:36 localhost systemd[152818]: Starting Create User's Volatile Files and Directories... 
Feb 1 04:14:36 localhost podman[152794]: 2026-02-01 09:14:36.301529798 +0000 UTC m=+0.195863749 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller) Feb 1 04:14:36 localhost systemd[152818]: Listening on D-Bus User Message Bus Socket. Feb 1 04:14:36 localhost systemd[152818]: Reached target Sockets. Feb 1 04:14:36 localhost podman[152794]: unhealthy Feb 1 04:14:36 localhost systemd[152818]: Finished Create User's Volatile Files and Directories. Feb 1 04:14:36 localhost systemd[152818]: Reached target Basic System. Feb 1 04:14:36 localhost systemd[152818]: Reached target Main User Target. Feb 1 04:14:36 localhost systemd[152818]: Startup finished in 111ms. Feb 1 04:14:36 localhost systemd[1]: Started User Manager for UID 0. Feb 1 04:14:36 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Feb 1 04:14:36 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 04:14:36 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:14:36 localhost systemd[1]: Started Session c11 of User root. Feb 1 04:14:36 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:14:36 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Failed with result 'exit-code'. Feb 1 04:14:36 localhost edpm-start-podman-container[152771]: Creating additional drop-in dependency for "ovn_controller" (c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835) Feb 1 04:14:36 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:14:36 localhost systemd[1]: Reloading. 
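At this point the first healthcheck reports "unhealthy" and its transient unit exits with status 1, even though the container is still working through kolla_set_configs. A hedged sketch of polling the container's health state via podman inspect; depending on the podman release the JSON key is State.Health or State.Healthcheck, so both are tried.

```python
import json
import subprocess

def container_health(name: str) -> str:
    """Return the reported health status ('starting', 'healthy', 'unhealthy'),
    or 'unknown' if podman exposes no health data for the container."""
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0].get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(container_health("ovn_controller"))
```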
Feb 1 04:14:36 localhost ovn_controller[152787]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:14:36 localhost ovn_controller[152787]: INFO:__main__:Validating config file Feb 1 04:14:36 localhost ovn_controller[152787]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:14:36 localhost ovn_controller[152787]: INFO:__main__:Writing out command to execute Feb 1 04:14:36 localhost ovn_controller[152787]: ++ cat /run_command Feb 1 04:14:36 localhost ovn_controller[152787]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 1 04:14:36 localhost ovn_controller[152787]: + ARGS= Feb 1 04:14:36 localhost ovn_controller[152787]: + sudo kolla_copy_cacerts Feb 1 04:14:36 localhost systemd-rc-local-generator[152882]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:14:36 localhost systemd-sysv-generator[152885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:14:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:14:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16437 DF PROTO=TCP SPT=39674 DPT=9102 SEQ=4255682689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654D20E0000000001030307) Feb 1 04:14:36 localhost systemd[1]: session-c11.scope: Deactivated successfully. Feb 1 04:14:36 localhost systemd[1]: Started ovn_controller container. Feb 1 04:14:36 localhost systemd[1]: Started Session c12 of User root. Feb 1 04:14:36 localhost systemd[1]: session-c12.scope: Deactivated successfully. Feb 1 04:14:36 localhost ovn_controller[152787]: + [[ ! -n '' ]] Feb 1 04:14:36 localhost ovn_controller[152787]: + . kolla_extend_start Feb 1 04:14:36 localhost ovn_controller[152787]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Feb 1 04:14:36 localhost ovn_controller[152787]: + umask 0022 Feb 1 04:14:36 localhost ovn_controller[152787]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Feb 1 04:14:36 localhost ovn_controller[152787]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00004|main|INFO|OVS IDL reconnected, force recompute. Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. 
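The ovn_controller lines above trace the kolla entrypoint flow: the config strategy is applied, the command is read from /run_command, and the shell then execs it. A loose Python illustration of that final read-and-exec step, assuming a /run_command file exists; the real containers do this in shell via kolla_start.

```python
import os
import shlex

def exec_run_command(path: str = "/run_command") -> None:
    """Read the container's command file and replace the current process with
    it, mirroring the `CMD=$(cat /run_command); exec $CMD` sequence visible
    in the ovn_controller startup log."""
    with open(path, encoding="utf-8") as fh:
        argv = shlex.split(fh.read().strip())
    os.execvp(argv[0], argv)  # does not return on success
```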
Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00013|main|INFO|OVS feature set changed, force recompute. Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00021|main|INFO|OVS feature set changed, force recompute. Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 1 04:14:36 localhost ovn_controller[152787]: 2026-02-01T09:14:36Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 1 04:14:37 localhost dnf[152769]: Metadata cache refreshed recently. Feb 1 04:14:38 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 1 04:14:38 localhost systemd[1]: Finished dnf makecache. Feb 1 04:14:38 localhost systemd[1]: dnf-makecache.service: Consumed 2.143s CPU time. 
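ovn-controller connects to the local ovsdb unix socket, then to the southbound database at tcp:ovsdbserver-sb.openstack.svc:6642, and finally to the br-int management socket. A hedged reachability probe for the SB endpoint named in the log, purely as a troubleshooting illustration.

```python
import socket

def sb_reachable(host: str = "ovsdbserver-sb.openstack.svc", port: int = 6642,
                 timeout: float = 3.0) -> bool:
    """Check whether the OVN southbound DB endpoint from the log accepts TCP
    connections; ovn-controller logged 'connected' for this target above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("SB DB reachable:", sb_reachable())
```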
Feb 1 04:14:38 localhost python3.9[152985]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 1 04:14:39 localhost python3.9[153077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45408 DF PROTO=TCP SPT=48746 DPT=9100 SEQ=754574183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654DE8D0000000001030307) Feb 1 04:14:40 localhost python3.9[153150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937279.1309137-1808-29059686757745/.source.yaml _original_basename=.3jiymto_ follow=False checksum=4ef88525fff00a5112f620461f949f82fa85c4cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:14:40 localhost python3.9[153242]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:40 localhost ovs-vsctl[153243]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Feb 1 04:14:41 localhost python3.9[153335]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:41 localhost ovs-vsctl[153337]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Feb 1 04:14:42 localhost python3.9[153430]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:14:42 localhost ovs-vsctl[153431]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Feb 1 04:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40900 DF PROTO=TCP SPT=44330 DPT=9101 SEQ=2650133204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654EB8E0000000001030307) Feb 1 04:14:43 localhost systemd[1]: session-49.scope: Deactivated successfully. Feb 1 04:14:43 localhost systemd[1]: session-49.scope: Consumed 41.213s CPU time. Feb 1 04:14:43 localhost systemd-logind[761]: Session 49 logged out. Waiting for processes to exit. Feb 1 04:14:43 localhost systemd-logind[761]: Removed session 49. 
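The ovs-vsctl calls above clear hw-offload and ovn-cms-options; note that the get on external_ids:ovn-cms-options fails with 'no key' when the option was never set, and the play simply continues. A sketch wrapping the same commands with that failure tolerated explicitly; the wrapper itself is illustrative.

```python
import subprocess

def ovs_vsctl(*args: str, tolerate_failure: bool = False) -> str:
    """Run an ovs-vsctl command; optionally ignore a non-zero exit such as
    the 'no key "ovn-cms-options"' error seen above."""
    result = subprocess.run(["ovs-vsctl", *args], capture_output=True, text=True)
    if result.returncode != 0 and not tolerate_failure:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

ovs_vsctl("remove", "open", ".", "other_config", "hw-offload")
ovs_vsctl("get", "Open_vSwitch", ".", "external_ids:ovn-cms-options",
          tolerate_failure=True)
ovs_vsctl("remove", "Open_vSwitch", ".", "external_ids", "ovn-cms-options")
```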
Feb 1 04:14:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16173 DF PROTO=TCP SPT=48340 DPT=9882 SEQ=4134950418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA654F50D0000000001030307) Feb 1 04:14:46 localhost systemd[1]: Stopping User Manager for UID 0... Feb 1 04:14:46 localhost systemd[152818]: Activating special unit Exit the Session... Feb 1 04:14:46 localhost systemd[152818]: Stopped target Main User Target. Feb 1 04:14:46 localhost systemd[152818]: Stopped target Basic System. Feb 1 04:14:46 localhost systemd[152818]: Stopped target Paths. Feb 1 04:14:46 localhost systemd[152818]: Stopped target Sockets. Feb 1 04:14:46 localhost systemd[152818]: Stopped target Timers. Feb 1 04:14:46 localhost systemd[152818]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 04:14:46 localhost systemd[152818]: Closed D-Bus User Message Bus Socket. Feb 1 04:14:46 localhost systemd[152818]: Stopped Create User's Volatile Files and Directories. Feb 1 04:14:46 localhost systemd[152818]: Removed slice User Application Slice. Feb 1 04:14:46 localhost systemd[152818]: Reached target Shutdown. Feb 1 04:14:46 localhost systemd[152818]: Finished Exit the Session. Feb 1 04:14:46 localhost systemd[152818]: Reached target Exit the Session. Feb 1 04:14:46 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 1 04:14:46 localhost systemd[1]: Stopped User Manager for UID 0. Feb 1 04:14:46 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 1 04:14:46 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 1 04:14:46 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 1 04:14:46 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 1 04:14:46 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 1 04:14:49 localhost sshd[153448]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:14:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16439 DF PROTO=TCP SPT=39674 DPT=9102 SEQ=4255682689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655030D0000000001030307) Feb 1 04:14:49 localhost systemd-logind[761]: New session 51 of user zuul. Feb 1 04:14:49 localhost systemd[1]: Started Session 51 of User zuul. 
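The recurring kernel DROPPING entries record TCP SYNs from 192.168.122.10 toward ports 9100-9102 and 9882 on br-ex being rejected by the host firewall's logging rule. A small parser sketch that tallies dropped destination ports using the field layout visible in these lines; the /var/log/messages path is an assumption about where rsyslog writes them.

```python
import re
from collections import Counter

# Matches the SRC/DST/PROTO/DPT fields of the kernel DROPPING lines in this log.
DROP_RE = re.compile(r"DROPPING:.*? SRC=(\S+) DST=(\S+).*? PROTO=(\S+) SPT=\d+ DPT=(\d+)")

def summarize_drops(lines):
    """Count dropped packets per (protocol, destination port)."""
    counts = Counter()
    for line in lines:
        match = DROP_RE.search(line)
        if match:
            _src, _dst, proto, dport = match.groups()
            counts[(proto, int(dport))] += 1
    return counts

with open("/var/log/messages") as fh:  # assumed log location; adjust as needed
    for (proto, dport), n in sorted(summarize_drops(fh).items()):
        print(f"{proto} dport {dport}: {n} dropped")
```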
Feb 1 04:14:50 localhost python3.9[153541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:14:51 localhost python3.9[153637]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:52 localhost python3.9[153729]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45410 DF PROTO=TCP SPT=48746 DPT=9100 SEQ=754574183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6550F0D0000000001030307) Feb 1 04:14:52 localhost python3.9[153821]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:52 localhost ovn_controller[152787]: 2026-02-01T09:14:52Z|00023|memory|INFO|14972 kB peak resident set size after 16.3 seconds Feb 1 04:14:52 localhost ovn_controller[152787]: 2026-02-01T09:14:52Z|00024|memory|INFO|idl-cells-OVN_Southbound:4033 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Feb 1 04:14:53 localhost python3.9[153913]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:54 localhost python3.9[154005]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:54 localhost sshd[154052]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:14:54 localhost python3.9[154097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:14:55 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40902 DF PROTO=TCP SPT=44330 DPT=9101 SEQ=2650133204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6551B0D0000000001030307) Feb 1 04:14:56 localhost python3.9[154190]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 1 04:14:57 localhost python3.9[154280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:58 localhost python3.9[154353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937296.7601776-215-265932697695622/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:14:58 localhost python3.9[154443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:14:59 localhost python3.9[154516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937298.2856524-260-276888307442129/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19082 DF PROTO=TCP SPT=35370 DPT=9882 SEQ=883227336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6552D8D0000000001030307) Feb 1 04:15:00 localhost python3.9[154623]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19083 DF PROTO=TCP SPT=35370 DPT=9882 SEQ=883227336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655318E0000000001030307) Feb 1 04:15:01 localhost python3.9[154723]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None 
nobest=None releasever=None Feb 1 04:15:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19084 DF PROTO=TCP SPT=35370 DPT=9882 SEQ=883227336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655398D0000000001030307) Feb 1 04:15:05 localhost python3.9[154832]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:15:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:15:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5367 DF PROTO=TCP SPT=52206 DPT=9102 SEQ=3731879935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655472F0000000001030307) Feb 1 04:15:06 localhost podman[154835]: 2026-02-01 09:15:06.651946773 +0000 UTC m=+0.090044409 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 1 04:15:06 localhost podman[154835]: 2026-02-01 09:15:06.690482598 +0000 UTC m=+0.128580234 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 1 04:15:06 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:15:07 localhost python3.9[154950]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:07 localhost python3.9[155021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937306.777654-371-104713098715086/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:08 localhost python3.9[155111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:08 localhost python3.9[155182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937307.8167362-371-53900048470524/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28239 DF PROTO=TCP SPT=38894 DPT=9100 SEQ=4282933993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655538D0000000001030307) Feb 1 04:15:10 localhost python3.9[155272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:10 localhost python3.9[155343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937309.747433-504-103882764735297/.source.conf _original_basename=10-neutron-metadata.conf follow=False 
checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:11 localhost python3.9[155433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:11 localhost python3.9[155504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937310.8610072-504-164437226338806/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:12 localhost python3.9[155594]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34642 DF PROTO=TCP SPT=48596 DPT=9101 SEQ=3909676888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65560CD0000000001030307) Feb 1 04:15:13 localhost python3.9[155688]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:14 localhost python3.9[155780]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:14 localhost python3.9[155828]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19086 DF PROTO=TCP SPT=35370 DPT=9882 SEQ=883227336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655690D0000000001030307) Feb 1 04:15:15 localhost python3.9[155920]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:15 localhost python3.9[155968]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:17 localhost python3.9[156060]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:18 localhost python3.9[156152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:18 localhost python3.9[156200]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5369 DF PROTO=TCP SPT=52206 DPT=9102 SEQ=3731879935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655770D0000000001030307) Feb 1 04:15:19 localhost python3.9[156292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:19 localhost python3.9[156340]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:20 localhost python3.9[156432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:15:20 localhost systemd[1]: Reloading. Feb 1 04:15:20 localhost systemd-sysv-generator[156463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:20 localhost systemd-rc-local-generator[156459]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:21 localhost python3.9[156562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28241 DF PROTO=TCP SPT=38894 DPT=9100 SEQ=4282933993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655830D0000000001030307) Feb 1 04:15:22 localhost python3.9[156610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:22 localhost python3.9[156702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:23 localhost python3.9[156750]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:24 localhost python3.9[156842]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:15:24 localhost systemd[1]: Reloading. Feb 1 04:15:24 localhost systemd-rc-local-generator[156863]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:24 localhost systemd-sysv-generator[156870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:24 localhost systemd[1]: Starting Create netns directory... Feb 1 04:15:24 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
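The ansible-ansible.builtin.systemd tasks above (daemon_reload=True, enabled=True, state=started for edpm-container-shutdown and netns-placeholder) boil down to three systemctl calls per unit, which is why each one is followed by a "Reloading." entry. A minimal Python sketch of that sequence, assuming systemctl is on PATH and the unit and 91-*.preset files shown above are already in place; the unit names come from the log, everything else is illustrative:

    import subprocess

    def enable_and_start(unit: str) -> None:
        """Mirror ansible.builtin.systemd with daemon_reload=True, enabled=True, state=started."""
        # Pick up the freshly copied unit and preset files.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # Apply the enable policy for the unit (the 91-*.preset files request this).
        subprocess.run(["systemctl", "enable", unit], check=True)
        # Start it now; a oneshot unit such as netns-placeholder runs once and deactivates.
        subprocess.run(["systemctl", "start", unit], check=True)

    for unit in ("edpm-container-shutdown.service", "netns-placeholder.service"):
        enable_and_start(unit)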
Feb 1 04:15:24 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 04:15:24 localhost systemd[1]: Finished Create netns directory. Feb 1 04:15:25 localhost python3.9[156976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34644 DF PROTO=TCP SPT=48596 DPT=9101 SEQ=3909676888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655910D0000000001030307) Feb 1 04:15:26 localhost python3.9[157068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:26 localhost python3.9[157141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937325.620699-956-130211412349834/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:27 localhost python3.9[157233]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:28 localhost python3.9[157325]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:15:29 localhost python3.9[157417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:29 localhost python3.9[157492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937328.5966964-1055-108474674340956/.source.json _original_basename=.dj85r6xt follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 1 04:15:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28013 DF PROTO=TCP SPT=41122 DPT=9882 SEQ=1891771518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655A2BD0000000001030307) Feb 1 04:15:30 localhost python3.9[157582]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28014 DF PROTO=TCP SPT=41122 DPT=9882 SEQ=1891771518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655A6CE0000000001030307) Feb 1 04:15:32 localhost python3.9[157835]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Feb 1 04:15:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28015 DF PROTO=TCP SPT=41122 DPT=9882 SEQ=1891771518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655AECD0000000001030307) Feb 1 04:15:33 localhost python3.9[157927]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:15:34 localhost python3[158019]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:15:34 localhost python3[158019]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8",#012 "Digest": "sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:29:34.446261637Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": 
"b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 785500417,#012 "VirtualSize": 785500417,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc/diff:/var/lib/containers/storage/overlay/33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:d3cc9cdab7e3e7c1a0a6c80e61bbd8cc5eeeba7069bab1cc064ed2e6cc28ed58",#012 "sha256:d5cbf3016eca6267717119e8ebab3c6c083cae6c589c6961ae23bfa93ef3afa4",#012 "sha256:0096ee5d07436ac5b94d9d58b8b2407cc5e6854d70de5e7f89b9a7a1ad4912ad"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 
},#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con Feb 1 04:15:35 localhost podman[158071]: 2026-02-01 09:15:35.069941399 +0000 UTC m=+0.090172084 container remove e8f71eedc0903bc3ad8d322be8c78b9b0d301bf239e5520b80b84e9b01b99e06 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08ca8fb8877681656a098784127ead43'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 1 04:15:35 localhost python3[158019]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Feb 1 04:15:35 localhost podman[158084]: Feb 1 04:15:35 localhost podman[158084]: 2026-02-01 09:15:35.175193307 +0000 UTC m=+0.083183911 container create 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:15:35 localhost podman[158084]: 2026-02-01 09:15:35.135078335 +0000 UTC m=+0.043068989 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:15:35 localhost python3[158019]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label 
managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:15:36 localhost python3.9[158210]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:15:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43372 DF PROTO=TCP SPT=55622 DPT=9102 SEQ=2315680981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655BC4E0000000001030307) Feb 1 04:15:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
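The PODMAN-CONTAINER-DEBUG entry above shows how edpm_container_manage turns the ovn_metadata_agent config_data dictionary into the podman create invocation that follows it (environment entries become --env, volumes become --volume, net/pid become --network/--pid, the healthcheck test becomes --healthcheck-command, and journald logging is added). A rough Python sketch of that flag mapping, using a trimmed copy of the config_data from this log; it is illustrative only, not the module's actual code:

    import shlex

    # Trimmed from the config_data logged above; paths and image are from this host.
    config_data = {
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "healthcheck": {"test": "/openstack/healthcheck"},
        "net": "host",
        "pid": "host",
        "privileged": True,
        "user": "root",
        "volumes": [
            "/run/openvswitch:/run/openvswitch:z",
            "/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro",
        ],
    }

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        """Build a podman create command line from an edpm-style config_data dict."""
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    print(shlex.join(podman_create_args("ovn_metadata_agent", config_data)))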
Feb 1 04:15:36 localhost podman[158305]: 2026-02-01 09:15:36.865828431 +0000 UTC m=+0.077128301 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 1 04:15:36 localhost python3.9[158304]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:36 localhost podman[158305]: 2026-02-01 09:15:36.967954477 +0000 UTC m=+0.179254307 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:15:36 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:15:37 localhost python3.9[158375]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:15:38 localhost python3.9[158466]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769937337.465682-1289-60329146993601/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:38 localhost python3.9[158512]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:15:38 localhost systemd[1]: Reloading. Feb 1 04:15:38 localhost systemd-rc-local-generator[158535]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:38 localhost systemd-sysv-generator[158539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:39 localhost python3.9[158594]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:15:39 localhost systemd[1]: Reloading. Feb 1 04:15:39 localhost systemd-rc-local-generator[158620]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:39 localhost systemd-sysv-generator[158623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1963 DF PROTO=TCP SPT=47986 DPT=9100 SEQ=1472699613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655C8CD0000000001030307) Feb 1 04:15:39 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 1 04:15:39 localhost systemd[1]: Started libcrun container. 
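The recurring kernel "DROPPING:" entries interleaved with the deployment steps are the log prefix of a firewall drop rule on br-ex; each one carries the same whitespace-separated KEY=VALUE fields (IN, SRC, DST, PROTO, SPT, DPT, and so on) plus bare flags such as SYN and DF. A small Python sketch that pulls those fields out of such a line for triage, fed with a sample taken from the entries above; the parsing approach is illustrative and is not part of the deployment tooling:

    import re

    SAMPLE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 "
              "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TTL=62 "
              "PROTO=TCP SPT=47986 DPT=9100 SYN")

    def parse_drop(line: str) -> dict:
        """Return the KEY=VALUE fields of a kernel firewall log line as a dict."""
        fields = dict(m.groups() for m in re.finditer(r"(\w+)=(\S*)", line))
        # Bare flags (SYN, DF, ...) have no '=' and are collected separately.
        flags = [tok for tok in line.split() if "=" not in tok and tok != "DROPPING:"]
        fields["FLAGS"] = ",".join(flags)
        return fields

    info = parse_drop(SAMPLE)
    print(f"{info['PROTO']} {info['SRC']}:{info['SPT']} -> {info['DST']}:{info['DPT']} dropped on {info['IN']}")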
Feb 1 04:15:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1847993324db0de5919ae17da5f618058e92eb21b99b51136b8b34c925eccdd/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 1 04:15:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1847993324db0de5919ae17da5f618058e92eb21b99b51136b8b34c925eccdd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:15:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:15:40 localhost podman[158636]: 2026-02-01 09:15:40.030895289 +0000 UTC m=+0.152396264 container init 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + sudo -E kolla_set_configs Feb 1 04:15:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
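The kolla_set_configs run that follows (copying /etc/neutron.conf.d/01-rootwrap.conf into /etc/neutron/rootwrap.conf, fixing ownership under /var/lib/neutron, then writing the command to execute) is driven by the ovn_metadata_agent.json file copied into /var/lib/kolla/config_files earlier and mounted into the container as config.json. A plausible shape for that file, sketched in Python from the standard kolla config.json schema and the paths visible in this log; the real contents on this host are not shown here, so treat owner and perm values as assumptions:

    import json

    # Assumed contents, consistent with the copy/permission steps kolla_set_configs logs below.
    ovn_metadata_agent_json = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",   # assumed
                "perm": "0600",       # assumed
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},  # assumed
        ],
    }

    # This is what ends up at /var/lib/kolla/config_files/config.json inside the container.
    print(json.dumps(ovn_metadata_agent_json, indent=2))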
Feb 1 04:15:40 localhost podman[158636]: 2026-02-01 09:15:40.074602411 +0000 UTC m=+0.196103416 container start 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:15:40 localhost edpm-start-podman-container[158636]: ovn_metadata_agent Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Validating config file Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Copying service configuration files Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Writing out command to execute Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: 
INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: ++ cat /run_command Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + CMD=neutron-ovn-metadata-agent Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + ARGS= Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + sudo kolla_copy_cacerts Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: Running command: 'neutron-ovn-metadata-agent' Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + [[ ! -n '' ]] Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + . kolla_extend_start Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + umask 0022 Feb 1 04:15:40 localhost ovn_metadata_agent[158650]: + exec neutron-ovn-metadata-agent Feb 1 04:15:40 localhost podman[158658]: 2026-02-01 09:15:40.164069508 +0000 UTC m=+0.083862916 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2) Feb 1 04:15:40 localhost podman[158658]: 2026-02-01 09:15:40.242689812 +0000 UTC m=+0.162483170 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:15:40 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:15:40 localhost edpm-start-podman-container[158635]: Creating additional drop-in dependency for "ovn_metadata_agent" (412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5) Feb 1 04:15:40 localhost systemd[1]: Reloading. Feb 1 04:15:40 localhost systemd-sysv-generator[158731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:40 localhost systemd-rc-local-generator[158726]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:40 localhost systemd[1]: Started ovn_metadata_agent container. 
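The wall of DEBUG lines that follows is neutron-ovn-metadata-agent echoing every configuration option at startup through oslo.config's log_opt_values (debug = True is set in the generated configuration). A minimal Python sketch of the same mechanism, assuming the oslo.config package is installed; the registered option mirrors agent_down_time from the dump, but the snippet is an example rather than neutron's own startup code:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.IntOpt("agent_down_time", default=75)])

    # In the dump above the agent was started with config_file=/etc/neutron/neutron.conf
    # and config_dir=/etc/neutron.conf.d; conf.d snippets (01-, 05-, 10-...) are read in
    # lexical order, which is why the generated files carry numeric prefixes.
    CONF([], project="neutron")

    # Emits one "option = value ... log_opt_values" line per option, as seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)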
Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.698 158655 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.698 158655 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.698 158655 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.699 158655 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 
158655 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.700 158655 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 
04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.701 158655 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.702 158655 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 
04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.703 158655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG 
neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.704 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.705 158655 
DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.706 158655 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG 
neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.707 158655 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.708 158655 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.709 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 
2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.710 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.711 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.712 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.713 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.714 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 
localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.715 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = 
respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.716 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.717 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.718 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
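
Note: the long "<option> = <value>" dump above and below is produced by oslo.config's log_opt_values() (the cfg.py:2589-2613 frames in each entry), which walks every registered option group at DEBUG level once the agent has parsed /etc/neutron/neutron.conf; options registered as secret, such as transport_url and metadata_proxy_shared_secret, are masked as ****. The following is a minimal sketch of how an oslo.config/oslo.log based service emits this kind of startup dump; it is illustrative only (the example option registration and wiring are assumptions, not the actual neutron-ovn-metadata-agent code), while the 'neutron' project name, the config file path, and the empty command-line args follow the log itself.

    import logging
    import sys

    from oslo_config import cfg
    from oslo_log import log as oslo_logging

    CONF = cfg.CONF

    # Example option only, registered here so the sketch is self-contained.
    CONF.register_opts([cfg.IntOpt('metadata_workers', default=1)])


    def main():
        # oslo.log contributes the debug/log_file/... options; debug = True in
        # the dump above is what lets the DEBUG-level option dump through.
        oslo_logging.register_options(CONF)

        # Parse CLI args and config files. The agent above was started with no
        # CLI args ("command line args: []") and read /etc/neutron/neutron.conf,
        # plus anything under the config_dir /etc/neutron.conf.d.
        CONF(sys.argv[1:], project='neutron',
             default_config_files=['/etc/neutron/neutron.conf'])

        oslo_logging.setup(CONF, 'neutron')
        log = oslo_logging.getLogger(__name__)

        # This call writes the banner of asterisks, the "Configuration options
        # gathered from:" header, every option and option-group value, and
        # masks secret options as '****', exactly as seen in the log.
        CONF.log_opt_values(log, logging.DEBUG)


    if __name__ == '__main__':
        main()

In the real agent the options come from neutron's own option modules rather than an inline register_opts() call, but the loading-then-dumping sequence is the same, which is why the dump ends with the closing banner at cfg.py:2613 further below.
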
Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.719 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.720 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.721 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.722 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.723 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.724 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type 
= ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.725 158655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.726 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.727 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.728 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.729 158655 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.738 158655 DEBUG 
ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.738 158655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.738 158655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.738 158655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.739 158655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.752 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f18e6148-4a7e-452d-80cb-72c86b59e439 (UUID: f18e6148-4a7e-452d-80cb-72c86b59e439) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.770 158655 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.770 158655 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.770 158655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.770 158655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.772 158655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.774 158655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.783 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f18e6148-4a7e-452d-80cb-72c86b59e439'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '313b4605-18bf-5934-ac37-75f1eb3b119e', 'neutron:ovn-metadata-sb-cfg': '1'}, name=f18e6148-4a7e-452d-80cb-72c86b59e439, nb_cfg_timestamp=1769937286372, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.784 158655 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.785 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.785 158655 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.785 158655 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.785 158655 INFO oslo_service.service [-] Starting 1 workers#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.787 158655 DEBUG oslo_service.service [-] Started child 158755 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.789 158655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpez0k0e0b/privsep.sock']#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.791 158755 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-259738'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.817 158755 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.818 158755 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.818 158755 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.824 158755 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.825 158755 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 1 04:15:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:41.838 158755 INFO eventlet.wsgi.server [-] (158755) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.386 158655 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.387 158655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpez0k0e0b/privsep.sock __init__ 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.274 158836 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.279 158836 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.282 158836 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.283 158836 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158836#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.390 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[d32cc769-0367-47d5-9345-6f725f7094ca]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:15:42 localhost python3.9[158835]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.788 158836 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.788 158836 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:15:42 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:42.788 158836 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20652 DF PROTO=TCP SPT=60234 DPT=9101 SEQ=1325276629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655D60D0000000001030307) Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.224 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaaf608-b59e-444a-a034-f44dd306af60]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.228 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, column=external_ids, values=({'neutron:ovn-metadata-id': '313b4605-18bf-5934-ac37-75f1eb3b119e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.229 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.230 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, 
record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.240 158655 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.241 158655 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.241 158655 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.241 158655 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.241 158655 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.241 158655 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.242 158655 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.243 158655 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.244 158655 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.245 158655 DEBUG oslo_service.service [-] host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] 
log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.246 158655 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.247 158655 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.248 158655 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.249 158655 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.250 158655 DEBUG oslo_service.service [-] 
nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.251 158655 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.252 158655 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG 
oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.253 158655 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.254 158655 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.255 158655 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.256 158655 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.257 158655 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.258 158655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 
158655 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.259 158655 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.260 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] 
privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.261 158655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.262 158655 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.263 158655 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.264 158655 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.265 158655 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.266 158655 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 158655 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.267 
158655 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.268 158655 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.269 158655 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 
DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.270 158655 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.271 158655 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 
DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.272 158655 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.273 158655 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 
04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.274 158655 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.275 158655 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.276 158655 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.277 158655 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.278 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.279 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.280 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.281 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 
localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.282 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.283 158655 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.284 158655 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.284 158655 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.284 158655 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:15:43 localhost ovn_metadata_agent[158650]: 2026-02-01 09:15:43.284 158655 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 1 04:15:43 localhost python3.9[158932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:15:44 localhost python3.9[159007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937343.164618-1424-268132857875924/.source.yaml _original_basename=.xipr1w40 follow=False checksum=08b98aaf8b4739d4298bc1690447f4cee3a9ba74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:15:44 localhost systemd-logind[761]: Session 51 logged out. Waiting for processes to exit. Feb 1 04:15:44 localhost systemd[1]: session-51.scope: Deactivated successfully. Feb 1 04:15:44 localhost systemd[1]: session-51.scope: Consumed 32.383s CPU time. Feb 1 04:15:44 localhost systemd-logind[761]: Removed session 51. Feb 1 04:15:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28017 DF PROTO=TCP SPT=41122 DPT=9882 SEQ=1891771518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655DF0D0000000001030307) Feb 1 04:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43374 DF PROTO=TCP SPT=55622 DPT=9102 SEQ=2315680981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655ED0E0000000001030307) Feb 1 04:15:50 localhost sshd[159022]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:15:50 localhost systemd-logind[761]: New session 52 of user zuul. Feb 1 04:15:50 localhost systemd[1]: Started Session 52 of User zuul. Feb 1 04:15:51 localhost python3.9[159115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:15:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1965 DF PROTO=TCP SPT=47986 DPT=9100 SEQ=1472699613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA655F90E0000000001030307) Feb 1 04:15:52 localhost python3.9[159211]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:15:53 localhost python3.9[159316]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:15:53 localhost systemd[1]: libpod-80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5.scope: Deactivated successfully. 
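The long run of DEBUG records ending above is the ovn_metadata_agent dumping every registered oslo.config option at startup: each "group.option = value ... cfg.py:2609" record is one option, and the banner of asterisks logged from cfg.py:2613 closes the dump. A minimal, self-contained sketch of how such a dump is produced, assuming only the stock oslo.config and logging libraries (the group and option names below are just two examples copied from the log, not the agent's real option set):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Register a couple of options under a group, mirroring entries seen above.
    opts = [
        cfg.IntOpt('thread_pool_size', default=8),
        cfg.StrOpt('user'),
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts, group='privsep_namespace')
    CONF([])  # parse an empty command line / config

    # Emits one DEBUG record per registered option, then a closing '*' banner,
    # which is the pattern visible in the agent log above.
    CONF.log_opt_values(LOG, logging.DEBUG)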
Feb 1 04:15:53 localhost podman[159317]: 2026-02-01 09:15:53.28154865 +0000 UTC m=+0.078452390 container died 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 1 04:15:53 localhost systemd[1]: tmp-crun.Cf2hHx.mount: Deactivated successfully. Feb 1 04:15:53 localhost podman[159317]: 2026-02-01 09:15:53.317461169 +0000 UTC m=+0.114364909 container cleanup 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 1 04:15:53 localhost podman[159332]: 2026-02-01 09:15:53.35841566 +0000 UTC m=+0.072406300 container remove 80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, 
org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team) Feb 1 04:15:53 localhost systemd[1]: libpod-conmon-80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5.scope: Deactivated successfully. Feb 1 04:15:54 localhost systemd[1]: tmp-crun.OBPeod.mount: Deactivated successfully. Feb 1 04:15:54 localhost systemd[1]: var-lib-containers-storage-overlay-79513147d587ecdcd7bb2edc01bb3b7dc549ee20844dd0dc1e7a6b286443d3ff-merged.mount: Deactivated successfully. Feb 1 04:15:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80e1d00e5bec4cd7f19cd10160562eb6b3744e0bb96cf5719096238b43ba4ee5-userdata-shm.mount: Deactivated successfully. Feb 1 04:15:54 localhost python3.9[159436]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:15:54 localhost systemd[1]: Reloading. Feb 1 04:15:54 localhost systemd-rc-local-generator[159459]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:15:54 localhost systemd-sysv-generator[159464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:15:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:15:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20654 DF PROTO=TCP SPT=60234 DPT=9101 SEQ=1325276629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656070D0000000001030307) Feb 1 04:15:56 localhost python3.9[159561]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:15:56 localhost network[159578]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:15:56 localhost network[159579]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:15:56 localhost network[159580]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:15:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
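The ansible.legacy.command tasks above first list containers matching ^nova_virtlogd$ and then stop the match, which is what triggers the subsequent podman "container died", "cleanup" and "remove" events. A minimal sketch of that check-then-stop sequence using Python's subprocess instead of Ansible (the container name is taken from the log; everything else is illustrative):

    import subprocess

    name = "nova_virtlogd"  # container name taken from the log above
    found = subprocess.run(
        ["podman", "ps", "-a", "--filter", f"name=^{name}$", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    if found == name:
        # Stopping the container produces the 'died' and 'cleanup' events seen
        # above; the following 'remove' event suggests it was created with --rm.
        subprocess.run(["podman", "stop", name], check=True)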
Feb 1 04:16:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9967 DF PROTO=TCP SPT=55966 DPT=9882 SEQ=1571398132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65617ED0000000001030307) Feb 1 04:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9968 DF PROTO=TCP SPT=55966 DPT=9882 SEQ=1571398132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6561C0E0000000001030307) Feb 1 04:16:02 localhost python3.9[159812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:02 localhost systemd[1]: Reloading. Feb 1 04:16:02 localhost systemd-rc-local-generator[159875]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:16:02 localhost systemd-sysv-generator[159878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:16:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:16:02 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Feb 1 04:16:02 localhost systemd[1]: tmp-crun.mRrbdc.mount: Deactivated successfully. Feb 1 04:16:02 localhost podman[159939]: 2026-02-01 09:16:02.577141101 +0000 UTC m=+0.092687934 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, vcs-type=git, release=1764794109, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Feb 1 04:16:02 localhost podman[159939]: 2026-02-01 09:16:02.692210064 +0000 UTC m=+0.207756927 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, maintainer=Guillaume Abrioux , 
org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_BRANCH=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, RELEASE=main) Feb 1 04:16:03 localhost python3.9[160065]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9969 DF PROTO=TCP SPT=55966 DPT=9882 SEQ=1571398132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656240E0000000001030307) Feb 1 04:16:03 localhost python3.9[160218]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:04 localhost python3.9[160343]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:05 localhost python3.9[160436]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:06 localhost python3.9[160529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3112 DF PROTO=TCP SPT=40488 DPT=9102 SEQ=1685993537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656318D0000000001030307) Feb 1 04:16:07 localhost python3.9[160622]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:16:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
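Each ansible.builtin.systemd_service record above carries enabled=False and state=stopped for one of the tripleo_nova_* units. A rough manual equivalent, sketched with subprocess; the unit names are copied from the log, and "systemctl disable --now" is assumed here to be an acceptable stand-in for the separate disable and stop steps the Ansible module performs:

    import subprocess

    units = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in units:
        # Disable and stop in one step; ignore failures for units already gone.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)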
Feb 1 04:16:07 localhost podman[160624]: 2026-02-01 09:16:07.213670454 +0000 UTC m=+0.059092399 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:16:07 localhost podman[160624]: 2026-02-01 09:16:07.294740487 +0000 UTC m=+0.140162452 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible) Feb 1 04:16:07 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
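The health_status=healthy and exec_died records above are emitted when a transient systemd unit runs "/usr/bin/podman healthcheck run <container-id>", as shown in the "Started ..." line that precedes them. The same check can be triggered by hand; a minimal sketch, assuming the container name from the log and that a zero exit code indicates a passing healthcheck:

    import subprocess

    container = "ovn_controller"  # container name taken from the log above
    result = subprocess.run(["podman", "healthcheck", "run", container])
    print("healthy" if result.returncode == 0 else "unhealthy")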
Feb 1 04:16:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36679 DF PROTO=TCP SPT=33312 DPT=9100 SEQ=1524629492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6563E0D0000000001030307) Feb 1 04:16:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:16:10 localhost podman[160743]: 2026-02-01 09:16:10.578673895 +0000 UTC m=+0.081616133 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:16:10 localhost podman[160743]: 2026-02-01 09:16:10.609323704 +0000 UTC m=+0.112265822 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:16:10 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:16:10 localhost python3.9[160742]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:11 localhost python3.9[160853]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:11 localhost python3.9[160945]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20655 DF PROTO=TCP SPT=60234 DPT=9101 SEQ=1325276629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656470D0000000001030307) Feb 1 04:16:12 localhost python3.9[161037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:13 localhost python3.9[161129]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:13 localhost sshd[161222]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:16:13 localhost python3.9[161221]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:14 localhost python3.9[161315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:15 localhost python3.9[161407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9971 DF PROTO=TCP SPT=55966 DPT=9882 SEQ=1571398132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656550D0000000001030307) Feb 1 04:16:15 localhost python3.9[161499]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:16 localhost python3.9[161591]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:17 localhost python3.9[161683]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:17 localhost python3.9[161775]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:18 localhost python3.9[161867]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3114 DF PROTO=TCP SPT=40488 DPT=9102 SEQ=1685993537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656610D0000000001030307) Feb 1 04:16:19 localhost python3.9[161959]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:16:20 localhost python3.9[162051]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:21 localhost python3.9[162143]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:16:22 localhost python3.9[162235]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:16:22 localhost systemd[1]: Reloading. Feb 1 04:16:22 localhost systemd-sysv-generator[162262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:16:22 localhost systemd-rc-local-generator[162256]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:16:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
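In the ansible-ansible.legacy.command record above, the #012 sequences are syslog-escaped newlines inside the task's _raw_params. Expanded for readability, the shell fragment that task ran to retire certmonger reads:

    # the certmonger cleanup task from the record above, with #012 newlines expanded
    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi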
Feb 1 04:16:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36681 DF PROTO=TCP SPT=33312 DPT=9100 SEQ=1524629492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6566F0E0000000001030307) Feb 1 04:16:23 localhost python3.9[162363]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:23 localhost python3.9[162456]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:24 localhost python3.9[162549]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:25 localhost python3.9[162642]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45909 DF PROTO=TCP SPT=54704 DPT=9101 SEQ=2485291887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6567B0D0000000001030307) Feb 1 04:16:25 localhost python3.9[162735]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:26 localhost python3.9[162828]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:26 localhost python3.9[162921]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:16:28 localhost python3.9[163014]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Feb 1 04:16:29 localhost python3.9[163107]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 1 04:16:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=12654 DF PROTO=TCP SPT=52276 DPT=9882 SEQ=1185292542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6568D1D0000000001030307) Feb 1 04:16:30 localhost python3.9[163205]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604215.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Feb 1 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12655 DF PROTO=TCP SPT=52276 DPT=9882 SEQ=1185292542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656910D0000000001030307) Feb 1 04:16:31 localhost python3.9[163305]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:16:32 localhost python3.9[163359]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12656 DF PROTO=TCP SPT=52276 DPT=9882 SEQ=1185292542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656990D0000000001030307) Feb 1 04:16:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1971 DF PROTO=TCP SPT=37162 DPT=9102 SEQ=2952759018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656A6CD0000000001030307) Feb 1 04:16:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
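The ansible.builtin.group, ansible.builtin.user and ansible-ansible.legacy.dnf records above (04:16:29 through 04:16:32) create a libvirt account with fixed IDs and install the libvirt/QEMU stack. As a rough orientation only, not the commands those modules literally execute, the shell equivalent would be:

    # approximate shell equivalents of the group/user/package tasks logged above
    groupadd --gid 42473 libvirt
    useradd --uid 42473 --gid libvirt --comment "libvirt user" --shell /sbin/nologin libvirt
    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon qemu-kvm qemu-img \
        libguestfs libseccomp swtpm swtpm-tools edk2-ovmf ceph-common cyrus-sasl-scram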
Feb 1 04:16:37 localhost podman[163426]: 2026-02-01 09:16:37.87053399 +0000 UTC m=+0.086588895 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 1 04:16:37 localhost podman[163426]: 2026-02-01 09:16:37.975324299 +0000 UTC m=+0.191379154 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:16:37 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
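The kernel: DROPPING records that recur throughout this window (the next record is one of many) show inbound TCP SYNs on br-ex from 192.168.122.10 to ports 9100-9102 and 9882 being logged by the host firewall; the "DROPPING: " prefix is set by the logging rule itself, so the packets are presumably refused rather than answered. A minimal sketch of the kind of rule that produces output of this shape is shown below; the inet/filter/input names are assumptions, not read from this node.

    # if the firewall is nftables-based, locate the rule that sets the prefix
    nft list ruleset | grep -n 'DROPPING'
    # sketch of a log-then-drop rule of that shape; table and chain names are placeholders
    nft add rule inet filter input iifname br-ex \
        tcp dport '{ 9100-9102, 9882 }' log prefix '"DROPPING: "' drop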
Feb 1 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15171 DF PROTO=TCP SPT=59796 DPT=9100 SEQ=4276628298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656B34E0000000001030307) Feb 1 04:16:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:16:40 localhost podman[163455]: 2026-02-01 09:16:40.885019012 +0000 UTC m=+0.087027308 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:16:40 localhost podman[163455]: 2026-02-01 09:16:40.894672351 +0000 UTC m=+0.096680637 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:16:40 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:16:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:16:41.732 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:16:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:16:41.733 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:16:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:16:41.733 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47358 DF PROTO=TCP SPT=44292 DPT=9101 SEQ=2416393332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656C04D0000000001030307) Feb 1 04:16:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12658 DF PROTO=TCP SPT=52276 DPT=9882 SEQ=1185292542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656C90E0000000001030307) Feb 1 04:16:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1973 DF PROTO=TCP SPT=37162 DPT=9102 SEQ=2952759018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656D70D0000000001030307) Feb 1 04:16:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15173 DF PROTO=TCP SPT=59796 DPT=9100 SEQ=4276628298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656E30D0000000001030307) Feb 1 04:16:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47360 DF PROTO=TCP SPT=44292 DPT=9101 
SEQ=2416393332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA656F10E0000000001030307) Feb 1 04:16:57 localhost kernel: SELinux: Converting 2744 SID table entries... Feb 1 04:16:57 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). Feb 1 04:16:57 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:16:57 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:16:57 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:16:57 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:16:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:16:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:16:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:17:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18894 DF PROTO=TCP SPT=46190 DPT=9882 SEQ=1165195499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657024C0000000001030307) Feb 1 04:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18895 DF PROTO=TCP SPT=46190 DPT=9882 SEQ=1165195499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657064D0000000001030307) Feb 1 04:17:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18896 DF PROTO=TCP SPT=46190 DPT=9882 SEQ=1165195499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6570E4D0000000001030307) Feb 1 04:17:04 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=19 res=1 Feb 1 04:17:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4776 DF PROTO=TCP SPT=58730 DPT=9102 SEQ=338499038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6571BCD0000000001030307) Feb 1 04:17:07 localhost kernel: SELinux: Converting 2747 SID table entries... Feb 1 04:17:07 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:17:07 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:17:07 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:17:07 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:17:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:17:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:17:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:17:08 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=20 res=1 Feb 1 04:17:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:17:08 localhost podman[164608]: 2026-02-01 09:17:08.878157646 +0000 UTC m=+0.083058308 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:17:08 localhost podman[164608]: 2026-02-01 09:17:08.92234972 +0000 UTC m=+0.127250382 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 1 04:17:08 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
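The health_status / exec_died pairs from podman above are the periodic container health checks: systemd starts a transient /usr/bin/podman healthcheck run <container-id> unit, podman executes the test configured in config_data ('/openstack/healthcheck') inside the container, records the result, and the transient unit deactivates. The same check can be exercised by hand, for example:

    # rerun the configured health test; exit status 0 means healthy
    podman healthcheck run ovn_controller && echo healthy
    # last recorded result, as reflected in the container status column
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'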
Feb 1 04:17:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63894 DF PROTO=TCP SPT=54936 DPT=9100 SEQ=3599968820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657284D0000000001030307) Feb 1 04:17:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:17:11 localhost podman[164637]: 2026-02-01 09:17:11.860841617 +0000 UTC m=+0.076723099 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:17:11 localhost podman[164637]: 2026-02-01 09:17:11.869666651 +0000 UTC m=+0.085548133 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:17:11 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:17:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42629 DF PROTO=TCP SPT=53314 DPT=9101 SEQ=1484207914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657358D0000000001030307) Feb 1 04:17:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18898 DF PROTO=TCP SPT=46190 DPT=9882 SEQ=1165195499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6573F0D0000000001030307) Feb 1 04:17:17 localhost kernel: SELinux: Converting 2750 SID table entries... Feb 1 04:17:17 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:17:17 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:17:17 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:17:17 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:17:17 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:17:17 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:17:17 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:17:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4778 DF PROTO=TCP SPT=58730 DPT=9102 SEQ=338499038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6574B0D0000000001030307) Feb 1 04:17:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63896 DF PROTO=TCP SPT=54936 DPT=9100 SEQ=3599968820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657590D0000000001030307) Feb 1 04:17:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42631 DF PROTO=TCP SPT=53314 DPT=9101 SEQ=1484207914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657650D0000000001030307) Feb 1 04:17:25 localhost kernel: SELinux: Converting 2750 SID table entries... 
Feb 1 04:17:25 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:17:25 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:17:25 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:17:25 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:17:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:17:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:17:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:17:26 localhost systemd[1]: Reloading. Feb 1 04:17:26 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=22 res=1 Feb 1 04:17:26 localhost systemd-rc-local-generator[164696]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:17:26 localhost systemd-sysv-generator[164701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:17:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:17:26 localhost systemd[1]: Reloading. Feb 1 04:17:26 localhost systemd-sysv-generator[164737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:17:26 localhost systemd-rc-local-generator[164732]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:17:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:17:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36879 DF PROTO=TCP SPT=56628 DPT=9882 SEQ=47570602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657777D0000000001030307) Feb 1 04:17:30 localhost sshd[164751]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:17:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36880 DF PROTO=TCP SPT=56628 DPT=9882 SEQ=47570602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6577B8D0000000001030307) Feb 1 04:17:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36881 DF PROTO=TCP SPT=56628 DPT=9882 SEQ=47570602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657838D0000000001030307) Feb 1 04:17:35 localhost kernel: SELinux: Converting 2751 SID table entries... 
Feb 1 04:17:35 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 1 04:17:35 localhost kernel: SELinux: policy capability open_perms=1 Feb 1 04:17:35 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 1 04:17:35 localhost kernel: SELinux: policy capability always_check_network=0 Feb 1 04:17:35 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 1 04:17:35 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 1 04:17:35 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 1 04:17:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21995 DF PROTO=TCP SPT=33846 DPT=9102 SEQ=3172487913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657910D0000000001030307) Feb 1 04:17:39 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=23 res=1 Feb 1 04:17:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:17:39 localhost podman[164805]: 2026-02-01 09:17:39.425011125 +0000 UTC m=+0.089280158 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller) Feb 1 04:17:39 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. 
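Each "SELinux: Converting N SID table entries..." block in this window marks a policy reload, most likely triggered as the package transaction above installs modules that ship SELinux policy; the earlier note that system_u:object_r:insights_client_cache_t:s0 "became invalid (unmapped)" means that type was not defined in the policy loaded at that moment. The resulting policy state and the capabilities the kernel prints can be checked from userspace, e.g.:

    # overall SELinux status and the name of the loaded policy
    sestatus
    # kernel view of one of the capabilities listed above (1 = enabled)
    cat /sys/fs/selinux/policy_capabilities/network_peer_controls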
Feb 1 04:17:39 localhost podman[164805]: 2026-02-01 09:17:39.490064453 +0000 UTC m=+0.154333466 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:17:39 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:17:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38161 DF PROTO=TCP SPT=37594 DPT=9100 SEQ=3871358318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6579D8D0000000001030307) Feb 1 04:17:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:17:41.733 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:17:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:17:41.734 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:17:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:17:41.734 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:17:42 localhost podman[164851]: 2026-02-01 09:17:42.858732186 +0000 UTC m=+0.073545300 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:17:42 localhost podman[164851]: 2026-02-01 09:17:42.864334653 +0000 UTC m=+0.079147757 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:17:42 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:17:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36817 DF PROTO=TCP SPT=56806 DPT=9101 SEQ=3784984786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657AACD0000000001030307) Feb 1 04:17:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36883 DF PROTO=TCP SPT=56628 DPT=9882 SEQ=47570602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657B30D0000000001030307) Feb 1 04:17:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21997 DF PROTO=TCP SPT=33846 DPT=9102 SEQ=3172487913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657C10D0000000001030307) Feb 1 04:17:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38163 DF PROTO=TCP SPT=37594 DPT=9100 SEQ=3871358318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657CD0D0000000001030307) Feb 1 04:17:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36819 DF PROTO=TCP SPT=56806 DPT=9101 SEQ=3784984786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657DB0D0000000001030307) Feb 1 04:18:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42648 DF PROTO=TCP SPT=50878 DPT=9882 SEQ=1958605745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657ECAC0000000001030307) Feb 1 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42649 DF PROTO=TCP SPT=50878 DPT=9882 SEQ=1958605745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657F0CD0000000001030307) Feb 1 04:18:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42650 DF PROTO=TCP SPT=50878 DPT=9882 SEQ=1958605745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA657F8CD0000000001030307) Feb 1 04:18:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29942 DF PROTO=TCP SPT=38818 DPT=9102 SEQ=1030711405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658064D0000000001030307) Feb 1 04:18:09 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20393 DF PROTO=TCP SPT=44070 DPT=9100 SEQ=2168692773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65812CD0000000001030307) Feb 1 04:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:18:09 localhost podman[178219]: 2026-02-01 09:18:09.876277706 +0000 UTC m=+0.083242880 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:18:09 localhost podman[178219]: 2026-02-01 09:18:09.980704806 +0000 UTC m=+0.187669960 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:18:09 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62856 DF PROTO=TCP SPT=58168 DPT=9101 SEQ=3083926355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6581FCE0000000001030307) Feb 1 04:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:18:13 localhost podman[181599]: 2026-02-01 09:18:13.921527618 +0000 UTC m=+0.137141955 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent) Feb 1 04:18:13 localhost podman[181599]: 2026-02-01 09:18:13.95419796 +0000 UTC m=+0.169812317 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:18:13 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:18:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42652 DF PROTO=TCP SPT=50878 DPT=9882 SEQ=1958605745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658290E0000000001030307) Feb 1 04:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29944 DF PROTO=TCP SPT=38818 DPT=9102 SEQ=1030711405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658370D0000000001030307) Feb 1 04:18:20 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 1 04:18:20 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 1 04:18:20 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 1 04:18:20 localhost systemd[1]: sshd.service: Consumed 1.652s CPU time, read 32.0K from disk, written 0B to disk. Feb 1 04:18:20 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 1 04:18:20 localhost systemd[1]: Stopping sshd-keygen.target... Feb 1 04:18:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:18:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:18:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 1 04:18:20 localhost systemd[1]: Reached target sshd-keygen.target. Feb 1 04:18:20 localhost systemd[1]: Starting OpenSSH server daemon... Feb 1 04:18:20 localhost sshd[182822]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:18:20 localhost systemd[1]: Started OpenSSH server daemon. 
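The sshd restart just above is accompanied by "main: sshd: ssh-rsa algorithm is disabled", i.e. the SHA-1-based ssh-rsa signature algorithm is not in the server's accepted lists; on RHEL 9 that is typically the system-wide crypto policy's doing rather than a per-host sshd_config change. The effective lists can be read back from the running daemon:

    # effective signature-algorithm lists from the running sshd configuration
    sshd -T | grep -Ei 'pubkeyacceptedalgorithms|hostkeyalgorithms'
    # system-wide crypto policy feeding those defaults
    update-crypto-policies --show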
Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:20 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:21 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20395 DF PROTO=TCP SPT=44070 DPT=9100 SEQ=2168692773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658430E0000000001030307) Feb 1 04:18:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:18:22 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 04:18:22 localhost systemd[1]: Reloading. Feb 1 04:18:22 localhost systemd-rc-local-generator[183078]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:22 localhost systemd-sysv-generator[183082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:22 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 04:18:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:18:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62858 DF PROTO=TCP SPT=58168 DPT=9101 SEQ=3083926355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6584F0D0000000001030307) Feb 1 04:18:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49544 DF PROTO=TCP SPT=45860 DPT=9882 SEQ=2523180970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65861DD0000000001030307) Feb 1 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49545 DF PROTO=TCP SPT=45860 DPT=9882 SEQ=2523180970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65865CD0000000001030307) Feb 1 04:18:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49546 DF PROTO=TCP SPT=45860 DPT=9882 SEQ=2523180970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6586DCD0000000001030307) Feb 1 04:18:33 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 04:18:33 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 04:18:33 localhost systemd[1]: man-db-cache-update.service: Consumed 13.681s CPU time. Feb 1 04:18:33 localhost systemd[1]: run-re1bb5755a20945de89da22f2d015ad2c.service: Deactivated successfully. Feb 1 04:18:33 localhost systemd[1]: run-rcaf97bb4a3794f97a11806f5f12b390e.service: Deactivated successfully. 
Feb 1 04:18:34 localhost python3.9[191658]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:18:35 localhost systemd[1]: Reloading. Feb 1 04:18:35 localhost systemd-rc-local-generator[191686]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:35 localhost systemd-sysv-generator[191690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost python3.9[191807]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:18:36 localhost systemd[1]: Reloading. Feb 1 04:18:36 localhost systemd-rc-local-generator[191838]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:36 localhost systemd-sysv-generator[191841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=898 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=2722967410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6587B8D0000000001030307) Feb 1 04:18:37 localhost python3.9[191957]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:18:37 localhost systemd[1]: Reloading. Feb 1 04:18:37 localhost systemd-sysv-generator[191991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:37 localhost systemd-rc-local-generator[191986]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost python3.9[192106]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:18:38 localhost systemd[1]: Reloading. Feb 1 04:18:38 localhost systemd-rc-local-generator[192131]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:38 localhost systemd-sysv-generator[192135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23653 DF PROTO=TCP SPT=39478 DPT=9100 SEQ=2595214617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658880D0000000001030307) Feb 1 04:18:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:18:40 localhost podman[192162]: 2026-02-01 09:18:40.872901946 +0000 UTC m=+0.086427108 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 1 04:18:40 localhost podman[192162]: 2026-02-01 09:18:40.918027965 +0000 UTC m=+0.131553147 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:18:40 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:18:41 localhost python3.9[192279]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:18:41.734 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:18:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:18:41.736 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:18:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:18:41.736 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:18:41 localhost systemd[1]: Reloading. Feb 1 04:18:41 localhost systemd-rc-local-generator[192309]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:41 localhost systemd-sysv-generator[192312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:42 localhost python3.9[192427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:42 localhost systemd[1]: Reloading. Feb 1 04:18:42 localhost systemd-rc-local-generator[192450]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:42 localhost systemd-sysv-generator[192454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60405 DF PROTO=TCP SPT=41704 DPT=9101 SEQ=4163484514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658950E0000000001030307) Feb 1 04:18:43 localhost python3.9[192575]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:18:44 localhost systemd[1]: Reloading. 
Feb 1 04:18:44 localhost podman[192578]: 2026-02-01 09:18:44.098832572 +0000 UTC m=+0.100079007 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:18:44 localhost systemd-rc-local-generator[192623]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 04:18:44 localhost podman[192578]: 2026-02-01 09:18:44.134118263 +0000 UTC m=+0.135364698 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:18:44 localhost systemd-sysv-generator[192628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:44 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:18:45 localhost python3.9[192743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49548 DF PROTO=TCP SPT=45860 DPT=9882 SEQ=2523180970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6589D0D0000000001030307) Feb 1 04:18:45 localhost python3.9[192856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:46 localhost systemd[1]: Reloading. Feb 1 04:18:46 localhost systemd-sysv-generator[192889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:46 localhost systemd-rc-local-generator[192883]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=900 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=2722967410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658AB0D0000000001030307) Feb 1 04:18:49 localhost python3.9[193005]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:18:49 localhost systemd[1]: Reloading. Feb 1 04:18:49 localhost systemd-rc-local-generator[193029]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:18:49 localhost systemd-sysv-generator[193033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:18:50 localhost python3.9[193154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:51 localhost sshd[193268]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:18:51 localhost python3.9[193267]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:52 localhost python3.9[193382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23655 DF PROTO=TCP SPT=39478 DPT=9100 SEQ=2595214617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658B90D0000000001030307) Feb 1 04:18:53 localhost python3.9[193495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60407 DF PROTO=TCP SPT=41704 DPT=9101 SEQ=4163484514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658C50E0000000001030307) Feb 1 04:18:55 localhost python3.9[193608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:56 localhost python3.9[193721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:57 localhost python3.9[193834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:18:59 localhost python3.9[193947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12723 DF PROTO=TCP SPT=49046 DPT=9882 SEQ=55718926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658D70C0000000001030307) Feb 1 04:19:00 localhost python3.9[194060]: 
ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12724 DF PROTO=TCP SPT=49046 DPT=9882 SEQ=55718926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658DB0D0000000001030307) Feb 1 04:19:01 localhost python3.9[194173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12725 DF PROTO=TCP SPT=49046 DPT=9882 SEQ=55718926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658E30D0000000001030307) Feb 1 04:19:03 localhost python3.9[194286]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:04 localhost python3.9[194399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:05 localhost python3.9[194512]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:06 localhost python3.9[194625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 1 04:19:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42923 DF PROTO=TCP SPT=47972 DPT=9102 SEQ=776110545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658F08D0000000001030307) Feb 1 04:19:07 localhost python3.9[194738]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:07 localhost python3.9[194884]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:08 localhost python3.9[195027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:09 localhost python3.9[195154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15513 DF PROTO=TCP SPT=48760 DPT=9100 SEQ=3846254848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA658FD0D0000000001030307) Feb 1 04:19:09 localhost python3.9[195265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:10 localhost python3.9[195375]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:19:11 localhost python3.9[195483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:19:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:19:11 localhost podman[195501]: 2026-02-01 09:19:11.877841396 +0000 UTC m=+0.088745356 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:19:11 localhost podman[195501]: 2026-02-01 09:19:11.963668255 +0000 UTC m=+0.174572245 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:19:11 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:19:12 localhost python3.9[195616]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58712 DF PROTO=TCP SPT=58780 DPT=9101 SEQ=2487244976 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6590A4D0000000001030307) Feb 1 04:19:13 localhost python3.9[195706]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937552.003891-1664-193907424561782/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:14 localhost python3.9[195816]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:19:14 localhost podman[195907]: 2026-02-01 09:19:14.554893556 +0000 UTC m=+0.083044909 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:19:14 localhost podman[195907]: 2026-02-01 
09:19:14.565164746 +0000 UTC m=+0.093316099 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:19:14 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:19:14 localhost python3.9[195906]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937553.5368953-1664-89354729102577/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:15 localhost python3.9[196034]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12727 DF PROTO=TCP SPT=49046 DPT=9882 SEQ=55718926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659130D0000000001030307) Feb 1 04:19:15 localhost python3.9[196124]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937554.815992-1664-150770818330539/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:16 localhost python3.9[196234]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:17 localhost python3.9[196324]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937556.0009978-1664-17648239140229/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:17 localhost python3.9[196434]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:18 localhost python3.9[196524]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937557.253724-1664-137797485582053/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42925 DF PROTO=TCP SPT=47972 DPT=9102 SEQ=776110545 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080AA659210D0000000001030307) Feb 1 04:19:19 localhost python3.9[196634]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:19 localhost python3.9[196724]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937558.5369935-1664-139037321463834/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:20 localhost python3.9[196834]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:20 localhost python3.9[196922]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937559.7609825-1664-58800556915347/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:21 localhost python3.9[197032]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:22 localhost python3.9[197122]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769937560.942703-1664-79136557070601/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15515 DF PROTO=TCP SPT=48760 DPT=9100 SEQ=3846254848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6592D0D0000000001030307) Feb 1 04:19:24 localhost python3.9[197232]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:25 localhost python3.9[197342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58714 DF PROTO=TCP SPT=58780 DPT=9101 SEQ=2487244976 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6593B0D0000000001030307) Feb 1 04:19:25 localhost python3.9[197452]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:26 localhost python3.9[197562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:27 localhost python3.9[197672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:27 localhost python3.9[197782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:28 localhost python3.9[197892]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:29 localhost python3.9[198002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:29 localhost python3.9[198112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39928 DF PROTO=TCP SPT=37178 DPT=9882 SEQ=6424101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6594C3D0000000001030307) Feb 1 04:19:30 localhost python3.9[198222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:30 localhost python3.9[198332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39929 DF PROTO=TCP SPT=37178 DPT=9882 SEQ=6424101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659504D0000000001030307) Feb 1 04:19:31 localhost python3.9[198442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:32 localhost python3.9[198552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:32 localhost python3.9[198662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39930 DF PROTO=TCP SPT=37178 DPT=9882 SEQ=6424101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659584D0000000001030307) Feb 1 04:19:33 localhost 
python3.9[198772]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:34 localhost python3.9[198882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:34 localhost python3.9[198970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937573.9669535-2328-130868539584551/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:35 localhost python3.9[199080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:36 localhost python3.9[199168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937575.1464984-2328-159459720077710/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8408 DF PROTO=TCP SPT=44364 DPT=9102 SEQ=2839907613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65965CD0000000001030307) Feb 1 04:19:36 localhost python3.9[199278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:37 localhost python3.9[199366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937576.318512-2328-188162334281914/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:38 localhost python3.9[199476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:38 localhost python3.9[199564]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937577.662323-2328-72492885746653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:39 localhost python3.9[199674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10847 DF PROTO=TCP SPT=46148 DPT=9100 SEQ=1988782609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659724D0000000001030307) Feb 1 04:19:39 localhost python3.9[199762]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937578.957737-2328-24903482110414/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:40 localhost python3.9[199872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:41 localhost python3.9[199960]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937580.0829058-2328-91860248564206/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:41 localhost python3.9[200070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:19:41.735 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:19:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:19:41.736 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:19:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:19:41.736 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:19:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:19:42 localhost systemd[1]: tmp-crun.BAI8TF.mount: Deactivated successfully. Feb 1 04:19:42 localhost podman[200159]: 2026-02-01 09:19:42.195049207 +0000 UTC m=+0.086426979 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:19:42 localhost podman[200159]: 2026-02-01 09:19:42.285757735 +0000 UTC m=+0.177135527 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:19:42 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:19:42 localhost python3.9[200158]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937581.2322366-2328-170818026424784/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:42 localhost python3.9[200292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56094 DF PROTO=TCP SPT=47266 DPT=9101 SEQ=137913282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6597F8E0000000001030307) Feb 1 04:19:43 localhost python3.9[200380]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937582.4921122-2328-188115481102160/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:44 localhost python3.9[200490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:44 localhost python3.9[200578]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937583.6608582-2328-39857604497030/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:19:44 localhost systemd[1]: tmp-crun.7UlvA5.mount: Deactivated successfully. 
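The ansible-ansible.legacy.copy entries above stage the libvirt daemon configs (virtnodedevd.conf, virtproxyd.conf, virtqemud.conf, qemu.conf, virtsecretd.conf, auth.conf, sasl2/libvirt.conf) and the per-socket override.conf drop-ins with a fixed owner/group/mode. Below is a minimal Python sketch of that placement step only, using the dest/owner/group/mode values shown in the log; install_conf is a hypothetical helper, not the Ansible module's actual implementation.

```python
"""Sketch of the file placement described by the ansible.legacy.copy log
entries: copy a rendered source into place, then set owner, group and mode.
Illustrative only; requires root and a 'libvirt' user/group to run."""
import os
import shutil

def install_conf(src: str, dest: str, owner: str, group: str, mode: int) -> None:
    shutil.copyfile(src, dest)                    # content copy, like ansible.legacy.copy
    shutil.chown(dest, user=owner, group=group)   # owner=libvirt group=libvirt in the log
    os.chmod(dest, mode)                          # e.g. 0o640 for the libvirt daemon configs

# Example matching one logged task:
# install_conf("/tmp/virtqemud.conf.rendered", "/etc/libvirt/virtqemud.conf",
#              "libvirt", "libvirt", 0o640)
```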
Feb 1 04:19:44 localhost podman[200597]: 2026-02-01 09:19:44.871053291 +0000 UTC m=+0.082345942 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 1 04:19:44 localhost podman[200597]: 2026-02-01 09:19:44.900390199 +0000 UTC m=+0.111682820 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:19:44 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:19:45 localhost python3.9[200706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39932 DF PROTO=TCP SPT=37178 DPT=9882 SEQ=6424101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659890D0000000001030307) Feb 1 04:19:45 localhost python3.9[200794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937584.811555-2328-76833092473147/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:46 localhost python3.9[200904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:47 localhost python3.9[200992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937586.0166574-2328-28537287082178/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:47 localhost python3.9[201102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:48 localhost python3.9[201190]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937587.2215085-2328-218006800649132/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8410 DF PROTO=TCP SPT=44364 DPT=9102 SEQ=2839907613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659950D0000000001030307) Feb 1 04:19:49 localhost python3.9[201300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:49 localhost python3.9[201388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937588.6736314-2328-128116854492691/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:50 localhost python3.9[201498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:19:51 localhost python3.9[201586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937589.923061-2328-2423269523355/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:19:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10849 DF PROTO=TCP SPT=46148 DPT=9100 SEQ=1988782609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659A30D0000000001030307) Feb 1 04:19:52 localhost python3.9[201694]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:19:53 localhost python3.9[201807]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Feb 1 04:19:55 localhost python3.9[201917]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:19:55 localhost systemd[1]: Reloading. Feb 1 04:19:55 localhost systemd-rc-local-generator[201941]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:19:55 localhost systemd-sysv-generator[201944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
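The two tasks just logged set the os_enable_vtpm SELinux boolean persistently (ansible.posix.seboolean) and then restart virtlogd.service with a systemd daemon reload (ansible.builtin.systemd). A hedged sketch of the equivalent host commands is below; the run() helper is illustrative, and the Ansible modules do additional state checking that is not reproduced here.

```python
"""Sketch of the host-level effect of the seboolean and systemd tasks above."""
import subprocess

def run(*cmd: str) -> None:
    # check=True surfaces a non-zero exit as an exception, roughly like a failed task
    subprocess.run(cmd, check=True)

run("setsebool", "-P", "os_enable_vtpm", "on")   # persistent=True state=True in the log
run("systemctl", "daemon-reload")                # daemon_reload=True
run("systemctl", "restart", "virtlogd.service")  # state=restarted
```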
Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:19:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56096 DF PROTO=TCP SPT=47266 DPT=9101 SEQ=137913282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659AF0D0000000001030307) Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:55 localhost systemd[1]: Starting libvirt logging daemon socket... Feb 1 04:19:55 localhost systemd[1]: Listening on libvirt logging daemon socket. Feb 1 04:19:55 localhost systemd[1]: Starting libvirt logging daemon admin socket... Feb 1 04:19:55 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Feb 1 04:19:55 localhost systemd[1]: Starting libvirt logging daemon... Feb 1 04:19:55 localhost systemd[1]: Started libvirt logging daemon. Feb 1 04:19:56 localhost python3.9[202068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:19:56 localhost systemd[1]: Reloading. Feb 1 04:19:57 localhost systemd-sysv-generator[202098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:19:57 localhost systemd-rc-local-generator[202093]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:57 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Feb 1 04:19:57 localhost systemd[1]: Starting libvirt nodedev daemon socket... Feb 1 04:19:57 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Feb 1 04:19:57 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Feb 1 04:19:57 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Feb 1 04:19:57 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Feb 1 04:19:57 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Feb 1 04:19:57 localhost systemd[1]: Started libvirt nodedev daemon. Feb 1 04:19:57 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Feb 1 04:19:57 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Feb 1 04:19:57 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Feb 1 04:19:58 localhost python3.9[202247]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:19:58 localhost systemd[1]: Reloading. Feb 1 04:19:58 localhost systemd-rc-local-generator[202278]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:19:58 localhost systemd-sysv-generator[202283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:58 localhost systemd[1]: Starting libvirt proxy daemon socket... Feb 1 04:19:58 localhost systemd[1]: Listening on libvirt proxy daemon socket. Feb 1 04:19:58 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Feb 1 04:19:58 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Feb 1 04:19:58 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Feb 1 04:19:58 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Feb 1 04:19:58 localhost systemd[1]: Started libvirt proxy daemon. Feb 1 04:19:58 localhost setroubleshoot[202105]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bc542b42-7abb-4b5c-9fa2-16a0c9696397 Feb 1 04:19:58 localhost setroubleshoot[202105]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Feb 1 04:19:58 localhost setroubleshoot[202105]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bc542b42-7abb-4b5c-9fa2-16a0c9696397 Feb 1 04:19:58 localhost setroubleshoot[202105]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Feb 1 04:19:59 localhost python3.9[202425]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:19:59 localhost systemd[1]: Reloading. Feb 1 04:19:59 localhost systemd-sysv-generator[202455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:19:59 localhost systemd-rc-local-generator[202448]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:19:59 localhost systemd[1]: Listening on libvirt locking daemon socket. Feb 1 04:19:59 localhost systemd[1]: Starting libvirt QEMU daemon socket... Feb 1 04:19:59 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 1 04:19:59 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Feb 1 04:19:59 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Feb 1 04:19:59 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Feb 1 04:19:59 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Feb 1 04:19:59 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Feb 1 04:19:59 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Feb 1 04:19:59 localhost systemd[1]: Started Virtual Machine and Container Registration Service. 
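The setroubleshoot entries above report an AVC denial for /usr/sbin/virtlogd using the dac_read_search capability and quote two remediation paths (the "#012" sequences are journald/rsyslog escapes for embedded newlines). The sketch below wraps the catchall path using the exact commands quoted in the log; it is only appropriate if the access is judged legitimate, otherwise the dac_override plugin's full-auditing advice applies instead.

```python
"""Sketch wrapping the catchall remediation quoted by setroubleshoot above."""
import subprocess

# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
raw = subprocess.run(["ausearch", "-c", "virtlogd", "--raw"],
                     check=True, capture_output=True).stdout
# Writes my-virtlogd.te and my-virtlogd.pp in the current directory.
subprocess.run(["audit2allow", "-M", "my-virtlogd"], input=raw, check=True)

# semodule -X 300 -i my-virtlogd.pp  (installs the generated local module)
subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)
```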
Feb 1 04:19:59 localhost systemd[1]: Started libvirt QEMU daemon. Feb 1 04:20:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2742 DF PROTO=TCP SPT=51952 DPT=9882 SEQ=3964749355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659C16C0000000001030307) Feb 1 04:20:00 localhost python3.9[202599]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:20:00 localhost systemd[1]: Reloading. Feb 1 04:20:00 localhost systemd-rc-local-generator[202624]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:20:00 localhost systemd-sysv-generator[202629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:00 localhost systemd[1]: Starting libvirt secret daemon socket... Feb 1 04:20:00 localhost systemd[1]: Listening on libvirt secret daemon socket. Feb 1 04:20:00 localhost systemd[1]: Starting libvirt secret daemon admin socket... Feb 1 04:20:00 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Feb 1 04:20:00 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Feb 1 04:20:00 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Feb 1 04:20:00 localhost systemd[1]: Started libvirt secret daemon. 
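By this point the log shows each modular libvirt daemon restarted with its main, read-only, and admin sockets listening. The sketch below is a simple post-restart check; the unit names are inferred from the *.socket.d override directories created earlier and are not listed explicitly in the log.

```python
"""Sketch: confirm the libvirt socket units that received overrides are active."""
import subprocess

UNITS = [
    "virtlogd.socket", "virtlogd-admin.socket",
    "virtnodedevd.socket", "virtnodedevd-ro.socket", "virtnodedevd-admin.socket",
    "virtproxyd.socket", "virtproxyd-ro.socket", "virtproxyd-admin.socket",
    "virtqemud.socket", "virtqemud-ro.socket", "virtqemud-admin.socket",
    "virtsecretd.socket", "virtsecretd-ro.socket", "virtsecretd-admin.socket",
]

for unit in UNITS:
    # `systemctl is-active` prints the state and exits non-zero if not active
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True)
    print(f"{unit}: {state.stdout.strip() or state.stderr.strip()}")
```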
Feb 1 04:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2743 DF PROTO=TCP SPT=51952 DPT=9882 SEQ=3964749355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659C58D0000000001030307) Feb 1 04:20:01 localhost python3.9[202770]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:02 localhost python3.9[202880]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:20:03 localhost python3.9[202990]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2744 DF PROTO=TCP SPT=51952 DPT=9882 SEQ=3964749355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659CD8D0000000001030307) Feb 1 04:20:03 localhost python3.9[203102]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:20:04 localhost python3.9[203210]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:05 localhost python3.9[203296]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937604.3187819-3192-95176039703118/.source.xml follow=False _original_basename=secret.xml.j2 checksum=8e79ccae86c93336b3974fdc11794b13702e9d6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:06 localhost python3.9[203406]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 33fac0b9-80c7-560f-918a-c92d3021ca1e#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:06 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5758 DF PROTO=TCP SPT=33652 DPT=9102 SEQ=4144422978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659DB0D0000000001030307) Feb 1 04:20:06 localhost python3.9[203526]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:08 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Feb 1 04:20:08 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Feb 1 04:20:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42509 DF PROTO=TCP SPT=42254 DPT=9100 SEQ=3837395286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659E78D0000000001030307) Feb 1 04:20:11 localhost python3.9[203950]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:11 localhost python3.9[204060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
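The preceding tasks extract the Ceph cluster fsid from /var/lib/openstack/config/ceph/ceph.conf with an awk one-liner, then refresh the libvirt secret from the staged /tmp/secret.xml before deleting it. The sketch below mirrors those two steps; read_fsid is a hypothetical helper equivalent to the logged awk command, the secret UUID is the one from the log, and check=False on the undefine is an assumption that a not-yet-defined secret should not abort the flow.

```python
"""Sketch of the fsid lookup and libvirt secret refresh logged above."""
import subprocess

def read_fsid(path: str = "/var/lib/openstack/config/ceph/ceph.conf") -> str:
    # Equivalent of: awk -F '=' '/fsid/ {print $2}' ceph.conf | xargs
    with open(path) as conf:
        for line in conf:
            if "fsid" in line and "=" in line:
                return line.split("=", 1)[1].strip()
    raise ValueError(f"no fsid line found in {path}")

fsid = read_fsid()

# virsh secret-undefine <uuid>; virsh secret-define --file /tmp/secret.xml
subprocess.run(["virsh", "secret-undefine", "33fac0b9-80c7-560f-918a-c92d3021ca1e"],
               check=False)  # assumption: tolerate a secret that is not defined yet
subprocess.run(["virsh", "secret-define", "--file", "/tmp/secret.xml"], check=True)
```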
Feb 1 04:20:12 localhost podman[204149]: 2026-02-01 09:20:12.424962637 +0000 UTC m=+0.076776096 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:20:12 localhost podman[204149]: 2026-02-01 09:20:12.530786711 +0000 UTC m=+0.182600200 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true) Feb 1 04:20:12 localhost python3.9[204148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937611.5022473-3357-166113692975764/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:12 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23912 DF PROTO=TCP SPT=35626 DPT=9101 SEQ=1688313427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659F48D0000000001030307) Feb 1 04:20:13 localhost python3.9[204284]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:13 localhost sshd[204340]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:20:14 localhost python3.9[204396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:14 localhost python3.9[204453]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:20:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2746 DF PROTO=TCP SPT=51952 DPT=9882 SEQ=3964749355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA659FD0D0000000001030307) Feb 1 04:20:15 localhost systemd[1]: tmp-crun.crOu9p.mount: Deactivated successfully. 
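Several of the ansible-ansible.legacy.stat calls above request get_checksum=True with checksum_algorithm=sha1, and the matching copy tasks log the same style of 40-hex-digit digest (for example dc5ee7162311c27a6084cbee4052b901d56cb1ba for firewall/libvirt.yaml). A small sketch, for illustration only, of how such a digest could be reproduced locally and compared against the value recorded in the log; the path in the commented usage is taken from the entries above.

    import hashlib

    def sha1_of(path, chunk_size=65536):
        """Stream a file and return its SHA-1 hex digest, the same 40-character
        form the stat/copy tasks above record as 'checksum'."""
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical usage against the file handled in the log entries above:
    # print(sha1_of("/var/lib/edpm-config/firewall/libvirt.yaml"))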
Feb 1 04:20:15 localhost podman[204564]: 2026-02-01 09:20:15.300483489 +0000 UTC m=+0.093121028 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:20:15 localhost podman[204564]: 2026-02-01 09:20:15.332733558 +0000 UTC m=+0.125371127 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:20:15 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:20:15 localhost python3.9[204563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:15 localhost python3.9[204636]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2w8x5h1a recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:16 localhost python3.9[204746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:17 localhost python3.9[204803]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:17 localhost python3.9[204913]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5760 DF PROTO=TCP SPT=33652 DPT=9102 SEQ=4144422978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A0B0E0000000001030307) Feb 1 04:20:18 localhost python3[205024]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 1 04:20:19 localhost python3.9[205134]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:20 localhost python3.9[205191]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 
04:20:21 localhost python3.9[205301]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42511 DF PROTO=TCP SPT=42254 DPT=9100 SEQ=3837395286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A170E0000000001030307) Feb 1 04:20:22 localhost python3.9[205391]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937620.487486-3623-221950494658288/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:23 localhost python3.9[205501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:23 localhost python3.9[205558]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:25 localhost python3.9[205668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23914 DF PROTO=TCP SPT=35626 DPT=9101 SEQ=1688313427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A250D0000000001030307) Feb 1 04:20:25 localhost python3.9[205725]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:26 localhost python3.9[205835]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:27 localhost python3.9[205925]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769937626.0404294-3741-66577289254133/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False 
force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:27 localhost python3.9[206035]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:28 localhost python3.9[206145]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:29 localhost python3.9[206258]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61744 DF PROTO=TCP SPT=37710 DPT=9882 SEQ=1595016040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A369F0000000001030307) Feb 1 04:20:30 localhost python3.9[206368]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61745 DF PROTO=TCP SPT=37710 DPT=9882 SEQ=1595016040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A3A8D0000000001030307) Feb 1 04:20:31 localhost python3.9[206479]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:20:32 localhost python3.9[206591]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:20:32 localhost python3.9[206705]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61746 DF PROTO=TCP SPT=37710 DPT=9882 SEQ=1595016040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A428D0000000001030307) Feb 1 04:20:33 localhost python3.9[206815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:34 localhost python3.9[206903]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937633.035836-3957-192710607138931/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:34 localhost python3.9[207013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:35 localhost python3.9[207101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937634.3641233-4002-24455528867895/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37854 DF PROTO=TCP SPT=33554 DPT=9102 SEQ=3480415361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A504D0000000001030307) Feb 1 04:20:36 localhost python3.9[207211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:20:37 localhost python3.9[207299]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937636.4695392-4047-268693152017653/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:20:38 localhost python3.9[207409]: 
ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:20:38 localhost systemd[1]: Reloading. Feb 1 04:20:38 localhost systemd-sysv-generator[207436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:20:38 localhost systemd-rc-local-generator[207431]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:38 localhost systemd[1]: Reached target edpm_libvirt.target. Feb 1 04:20:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43146 DF PROTO=TCP SPT=35024 DPT=9100 SEQ=1318914317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A5CCD0000000001030307) Feb 1 04:20:40 localhost python3.9[207558]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 1 04:20:40 localhost systemd[1]: Reloading. Feb 1 04:20:40 localhost systemd-rc-local-generator[207580]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:20:40 localhost systemd-sysv-generator[207586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
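The nftables entries between 04:20:28 and 04:20:32 show a check-before-apply pattern: the generated fragments are first concatenated and validated with "nft -c -f -", then the chains file is loaded on its own, and finally the flush/rules/jump-update fragments are piped into "nft -f -". A hedged Python sketch of that same flow, assuming root privileges and the nft binary; the file names and ordering are taken from the log entries above, everything else is illustrative.

    import subprocess
    from pathlib import Path

    # Fragment order as it appears in the 04:20:28 check and 04:20:32 apply commands.
    CHECK_ORDER = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    APPLY_ORDER = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def nft_feed(paths, check_only):
        """Concatenate the fragments and pipe them into nft, mirroring the
        'cat ... | nft [-c] -f -' commands recorded above."""
        payload = b"".join(Path(p).read_bytes() for p in paths)
        cmd = ["nft", "-c", "-f", "-"] if check_only else ["nft", "-f", "-"]
        subprocess.run(cmd, input=payload, check=True)

    if __name__ == "__main__":
        nft_feed(CHECK_ORDER, check_only=True)                                      # syntax check only
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)  # load chains
        nft_feed(APPLY_ORDER, check_only=False)                                     # apply rule fragments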
Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: Reloading. Feb 1 04:20:40 localhost systemd-rc-local-generator[207623]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:20:40 localhost systemd-sysv-generator[207627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:20:41 localhost systemd[1]: session-52.scope: Deactivated successfully. Feb 1 04:20:41 localhost systemd[1]: session-52.scope: Consumed 3min 21.787s CPU time. Feb 1 04:20:41 localhost systemd-logind[761]: Session 52 logged out. Waiting for processes to exit. Feb 1 04:20:41 localhost systemd-logind[761]: Removed session 52. 
Feb 1 04:20:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:20:41.736 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:20:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:20:41.738 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:20:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:20:41.738 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:20:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:20:42 localhost systemd[1]: tmp-crun.RsZaFf.mount: Deactivated successfully. Feb 1 04:20:42 localhost podman[207650]: 2026-02-01 09:20:42.859772127 +0000 UTC m=+0.070927811 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:20:42 localhost podman[207650]: 2026-02-01 09:20:42.936747658 +0000 UTC m=+0.147903402 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 04:20:42 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46123 DF PROTO=TCP SPT=42572 DPT=9101 SEQ=1222515291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A69CE0000000001030307) Feb 1 04:20:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61748 DF PROTO=TCP SPT=37710 DPT=9882 SEQ=1595016040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A730E0000000001030307) Feb 1 04:20:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:20:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 
0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aabc1aa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By 
Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Feb 1 04:20:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:20:45 localhost systemd[1]: tmp-crun.TSmGFU.mount: Deactivated successfully. 
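The ceph-osd RocksDB "DUMPING STATS" blocks above reach syslog with embedded newlines escaped as #012 (octal 012 = '\n'), which is why each multi-line stats table appears as one long record; the #033 sequences in the ovn_metadata_agent lines are the same octal escaping applied to the ANSI ESC character. A small sketch that undoes this escaping so the dump reads as the table RocksDB actually emitted; the sample string is an excerpt from the record above.

    import re

    def unescape_syslog(record: str) -> str:
        """Replace '#NNN' octal escapes (e.g. #012 newline, #011 tab, #033 ESC)
        produced by the syslog transport with the characters they encode."""
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), record)

    sample = ("** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval"
              "#012Cumulative writes: 5433 writes, 23K keys")
    print(unescape_syslog(sample))
    # ** DB Stats **
    # Uptime(secs): 6000.1 total, 600.0 interval
    # Cumulative writes: 5433 writes, 23K keys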
Feb 1 04:20:45 localhost podman[207675]: 2026-02-01 09:20:45.875268476 +0000 UTC m=+0.092722527 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Feb 1 04:20:45 localhost podman[207675]: 2026-02-01 09:20:45.878771691 +0000 UTC m=+0.096225722 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:20:45 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:20:46 localhost sshd[207693]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:20:46 localhost systemd-logind[761]: New session 53 of user zuul. Feb 1 04:20:46 localhost systemd[1]: Started Session 53 of User zuul. Feb 1 04:20:47 localhost python3.9[207804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:20:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37856 DF PROTO=TCP SPT=33554 DPT=9102 SEQ=3480415361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A810E0000000001030307) Feb 1 04:20:49 localhost python3.9[207916]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:20:49 localhost network[207933]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:20:49 localhost network[207934]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:20:49 localhost network[207935]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:20:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:20:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.011 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55797eeb82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Feb 1 04:20:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43148 DF PROTO=TCP SPT=35024 DPT=9100 SEQ=1318914317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A8D0D0000000001030307) Feb 1 04:20:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:20:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46125 DF PROTO=TCP SPT=42572 DPT=9101 SEQ=1222515291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65A990D0000000001030307) Feb 1 04:20:55 localhost python3.9[208167]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:20:56 localhost python3.9[208230]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33076 DF PROTO=TCP SPT=58768 DPT=9882 SEQ=1104122674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AABCD0000000001030307) Feb 1 04:21:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33077 DF PROTO=TCP SPT=58768 DPT=9882 SEQ=1104122674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AAFCD0000000001030307) Feb 1 04:21:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33078 DF PROTO=TCP SPT=58768 DPT=9882 SEQ=1104122674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AB7CE0000000001030307) Feb 1 04:21:06 localhost python3.9[208342]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:21:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31360 DF PROTO=TCP SPT=44308 DPT=9102 SEQ=3300110845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AC54D0000000001030307) Feb 1 04:21:07 localhost python3.9[208454]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:08 localhost python3.9[208564]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None 
creates=None removes=None stdin=None Feb 1 04:21:08 localhost python3.9[208675]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:21:09 localhost python3.9[208786]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:21:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63306 DF PROTO=TCP SPT=56072 DPT=9100 SEQ=3317580477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AD1CD0000000001030307) Feb 1 04:21:10 localhost python3.9[208897]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:21:11 localhost python3.9[209045]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:12 localhost python3.9[209186]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:21:12 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Feb 1 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47621 DF PROTO=TCP SPT=48686 DPT=9101 SEQ=2534758677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65ADF0D0000000001030307) Feb 1 04:21:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
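The transient systemd units named after container IDs above simply run "/usr/bin/podman healthcheck run <container>", and the podman events that follow record the resulting health_status. For illustration, assuming podman is installed and the container exists, a sketch that triggers the same check from Python and reports whether the container's configured healthcheck command succeeded (exit status 0 means healthy).

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Invoke 'podman healthcheck run', as the transient units in this log do."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    # Container name taken from the entries above; usage is illustrative only.
    # print("healthy" if run_healthcheck("ovn_controller") else "unhealthy")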
Feb 1 04:21:13 localhost podman[209226]: 2026-02-01 09:21:13.876695393 +0000 UTC m=+0.087761738 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:21:13 localhost podman[209226]: 2026-02-01 09:21:13.920858106 +0000 UTC m=+0.131924471 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:21:13 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:21:14 localhost python3.9[209343]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:21:14 localhost systemd[1]: Reloading. 
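Editor's note: the podman healthcheck entries above (container health_status ... health_status=healthy, followed by exec_died) recur periodically for ovn_controller and ovn_metadata_agent and carry very long attribute lists. A small sketch for pulling the event type, container name and health status out of such lines when skimming this log; it assumes only the layout visible here.

```python
import re

# Match "container <event> <64-hex container id> (<attribute list>)" as it
# appears in the podman journal lines above.
EVENT_RE = re.compile(
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)"
)

def _attr(attrs: str, key: str):
    # Attributes are comma-separated "key=value" pairs; anchor on ", " so
    # e.g. "name" does not match inside "container_name".
    m = re.search(rf"(?:^|, ){re.escape(key)}=([^,)]+)", attrs)
    return m.group(1) if m else None

def parse_podman_event(line: str):
    m = EVENT_RE.search(line)
    if not m:
        return None
    attrs = m.group("attrs")
    return {
        "event": m.group("event"),
        "container": m.group("cid")[:12],
        "name": _attr(attrs, "name"),
        "health_status": _attr(attrs, "health_status"),
    }

if __name__ == "__main__":
    cid = "c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835"
    demo = (f"Feb 1 04:21:13 localhost podman[209226]: ... container health_status {cid} "
            "(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, "
            "name=ovn_controller, health_status=healthy, config_id=ovn_controller)")
    print(parse_podman_event(demo))
```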
Feb 1 04:21:14 localhost systemd-rc-local-generator[209368]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:21:14 localhost systemd-sysv-generator[209375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:15 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Feb 1 04:21:15 localhost systemd[1]: Starting Open-iSCSI... Feb 1 04:21:15 localhost iscsid[209384]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 1 04:21:15 localhost iscsid[209384]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 1 04:21:15 localhost iscsid[209384]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 1 04:21:15 localhost iscsid[209384]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 1 04:21:15 localhost iscsid[209384]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 1 04:21:15 localhost iscsid[209384]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 1 04:21:15 localhost iscsid[209384]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Feb 1 04:21:15 localhost systemd[1]: Started Open-iSCSI. Feb 1 04:21:15 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Feb 1 04:21:15 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. 
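Editor's note: the iscsid start-up warnings above are all caused by a missing /etc/iscsi/initiatorname.iscsi. Below is a hedged sketch of one way to create it; the com.example naming authority and the use of today's date are placeholders (the yyyy-mm field nominally records when the naming authority registered its domain), and real hosts usually generate the IQN with the iscsi-iname tool shipped by iscsi-initiator-utils instead.

```python
import datetime
import socket
from pathlib import Path

# Write an initiatorname.iscsi following the IQN layout quoted in the iscsid
# warning above. "com.example" and today's date are placeholder assumptions;
# prefer the iscsi-iname tool on real systems.
def write_initiator_name(path="/etc/iscsi/initiatorname.iscsi",
                         authority="com.example"):
    today = datetime.date.today()
    iqn = f"iqn.{today:%Y-%m}.{authority}:{socket.gethostname()}"
    Path(path).write_text(f"InitiatorName={iqn}\n")
    return iqn

if __name__ == "__main__":
    # Write to the current directory for a dry run instead of /etc/iscsi.
    print(write_initiator_name(path="./initiatorname.iscsi"))
```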
Feb 1 04:21:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33080 DF PROTO=TCP SPT=58768 DPT=9882 SEQ=1104122674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AE70D0000000001030307) Feb 1 04:21:16 localhost python3.9[209493]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:21:16 localhost network[209510]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:21:16 localhost network[209511]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:21:16 localhost network[209512]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:21:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:21:16 localhost podman[209518]: 2026-02-01 09:21:16.344432016 +0000 UTC m=+0.069963775 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:21:16 localhost podman[209518]: 2026-02-01 09:21:16.37860605 +0000 UTC m=+0.104137869 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Feb 1 04:21:16 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:21:17 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Feb 1 04:21:18 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Feb 1 04:21:18 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Feb 1 04:21:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:21:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31362 DF PROTO=TCP SPT=44308 DPT=9102 SEQ=3300110845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65AF50D0000000001030307) Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 5698bc1e-e2fe-4086-9a1f-bb7435d28bc8 Feb 1 04:21:19 localhost setroubleshoot[209544]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Feb 1 04:21:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63308 DF PROTO=TCP SPT=56072 DPT=9100 SEQ=3317580477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B010D0000000001030307) Feb 1 04:21:22 localhost python3.9[209780]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:21:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47623 DF PROTO=TCP SPT=48686 DPT=9101 SEQ=2534758677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B0F0D0000000001030307) Feb 1 04:21:26 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:21:26 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 04:21:26 localhost systemd[1]: Reloading. Feb 1 04:21:26 localhost systemd-rc-local-generator[209826]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:21:26 localhost systemd-sysv-generator[209830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
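Editor's note: the repeated setroubleshoot entries above all carry the same catchall suggestion for the iscsid denial. The sketch below simply wraps that suggested ausearch | audit2allow | semodule sequence in Python; review the generated my-iscsid.te before installing anything, and only proceed if the access genuinely should be allowed.

```python
import subprocess

# The exact workflow suggested by setroubleshoot in the log: dump the raw
# audit events for the iscsid command, build a local policy module with
# audit2allow, and install it with semodule at priority 300.
def build_local_policy(comm="iscsid", module="my-iscsid"):
    raw = subprocess.run(["ausearch", "-c", comm, "--raw"],
                         check=True, capture_output=True).stdout
    subprocess.run(["audit2allow", "-M", module], input=raw, check=True)
    subprocess.run(["semodule", "-X", "300", "-i", f"{module}.pp"], check=True)

if __name__ == "__main__":
    build_local_policy()
```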
Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:26 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 04:21:26 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:21:26 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 04:21:26 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 04:21:26 localhost systemd[1]: run-r62452d3adea44937a329029c597e48b9.service: Deactivated successfully. Feb 1 04:21:26 localhost systemd[1]: run-reacb2bd551814c8baef46cd863ab2523.service: Deactivated successfully. Feb 1 04:21:28 localhost python3.9[210073]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 04:21:29 localhost python3.9[210183]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 1 04:21:29 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Feb 1 04:21:29 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
Feb 1 04:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61713 DF PROTO=TCP SPT=33556 DPT=9882 SEQ=3908121571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B20FD0000000001030307) Feb 1 04:21:30 localhost python3.9[210298]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:21:30 localhost python3.9[210386]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937689.6352932-486-153305824394531/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61714 DF PROTO=TCP SPT=33556 DPT=9882 SEQ=3908121571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B250D0000000001030307) Feb 1 04:21:31 localhost python3.9[210496]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:32 localhost python3.9[210606]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:21:32 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 1 04:21:32 localhost systemd[1]: Stopped Load Kernel Modules. Feb 1 04:21:32 localhost systemd[1]: Stopping Load Kernel Modules... Feb 1 04:21:32 localhost systemd[1]: Starting Load Kernel Modules... Feb 1 04:21:32 localhost systemd-modules-load[210610]: Module 'msr' is built in Feb 1 04:21:32 localhost systemd[1]: Finished Load Kernel Modules. 
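Editor's note: the tasks above make the dm-multipath module persistent (modprobe now, a /etc/modules-load.d/dm-multipath.conf snippet plus an /etc/modules line for boot, then a restart of systemd-modules-load.service); the same pattern is repeated further down for nvme-fabrics. A compact sketch of the core steps, with paths and the module name taken from the log; the /etc/modules line is omitted here for brevity.

```python
import subprocess
from pathlib import Path

# Load a kernel module now and make it load again on boot via
# systemd-modules-load, mirroring the steps logged above.
def persist_module(name: str):
    subprocess.run(["modprobe", name], check=True)                  # load now
    Path(f"/etc/modules-load.d/{name}.conf").write_text(name + "\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)

if __name__ == "__main__":
    persist_module("dm-multipath")
```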
Feb 1 04:21:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61715 DF PROTO=TCP SPT=33556 DPT=9882 SEQ=3908121571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B2D0D0000000001030307) Feb 1 04:21:34 localhost python3.9[210720]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:21:35 localhost sshd[210777]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:21:35 localhost python3.9[210833]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:21:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6988 DF PROTO=TCP SPT=45186 DPT=9102 SEQ=882411208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B3A8D0000000001030307) Feb 1 04:21:36 localhost python3.9[210943]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:21:37 localhost python3.9[211031]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937696.3369985-638-50303793189658/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:38 localhost python3.9[211141]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:21:38 localhost python3.9[211252]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:39 localhost python3.9[211362]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19980 DF PROTO=TCP SPT=59126 DPT=9100 SEQ=3658720964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B470D0000000001030307) Feb 1 04:21:40 localhost python3.9[211472]: ansible-ansible.builtin.replace Invoked 
with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:41 localhost python3.9[211582]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:21:41.737 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:21:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:21:41.738 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:21:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:21:41.738 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:21:41 localhost python3.9[211692]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:42 localhost python3.9[211802]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2691 DF PROTO=TCP SPT=53186 DPT=9101 SEQ=945611142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B544D0000000001030307) Feb 1 04:21:43 localhost python3.9[211912]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
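Editor's note: taken together, the grep/lineinfile/replace tasks above leave /etc/multipath.conf with a blacklist section whose catch-all devnode ".*" entry is stripped, and four settings enforced under defaults: find_multipaths yes, recheck_wwid yes, skip_kpartx yes, user_friendly_names no. The checker below is reconstructed from those task parameters; it is not a copy of the deployed file and ignores whatever else the template ships.

```python
from pathlib import Path

# The four defaults enforced by the lineinfile tasks logged above.
EXPECTED = {
    "find_multipaths": "yes",
    "recheck_wwid": "yes",
    "skip_kpartx": "yes",
    "user_friendly_names": "no",
}

def check_defaults(path="/etc/multipath.conf"):
    """Report which of the enforced defaults appear in multipath.conf."""
    text = Path(path).read_text()
    return {key: f"{key} {val}" in text for key, val in EXPECTED.items()}

if __name__ == "__main__":
    for key, present in check_defaults().items():
        print(f"{key:20} {'ok' if present else 'MISSING'}")
```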
Feb 1 04:21:44 localhost systemd[1]: tmp-crun.pMXc6u.mount: Deactivated successfully. Feb 1 04:21:44 localhost podman[212023]: 2026-02-01 09:21:44.183236504 +0000 UTC m=+0.121415897 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3) Feb 1 04:21:44 localhost python3.9[212022]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:21:44 localhost podman[212023]: 2026-02-01 09:21:44.252752176 +0000 UTC m=+0.190931569 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 1 04:21:44 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:21:45 localhost python3.9[212160]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:21:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61717 DF PROTO=TCP SPT=33556 DPT=9882 SEQ=3908121571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B5D0D0000000001030307) Feb 1 04:21:45 localhost python3.9[212271]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:21:45 localhost systemd[1]: Listening on multipathd control socket. Feb 1 04:21:46 localhost python3.9[212385]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:21:47 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization... Feb 1 04:21:47 localhost podman[212387]: 2026-02-01 09:21:47.040421068 +0000 UTC m=+0.063429630 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:21:47 localhost udevadm[212402]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in. 
Feb 1 04:21:47 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization. Feb 1 04:21:47 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 1 04:21:47 localhost podman[212387]: 2026-02-01 09:21:47.074610252 +0000 UTC m=+0.097618784 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:21:47 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:21:47 localhost multipathd[212410]: --------start up-------- Feb 1 04:21:47 localhost multipathd[212410]: read /etc/multipath.conf Feb 1 04:21:47 localhost multipathd[212410]: path checkers start up Feb 1 04:21:47 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. 
Feb 1 04:21:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6990 DF PROTO=TCP SPT=45186 DPT=9102 SEQ=882411208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B6B0D0000000001030307) Feb 1 04:21:49 localhost python3.9[212527]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 04:21:49 localhost python3.9[212637]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 1 04:21:50 localhost python3.9[212756]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:21:51 localhost python3.9[212844]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769937710.390496-1028-60450042753356/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19982 DF PROTO=TCP SPT=59126 DPT=9100 SEQ=3658720964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B770D0000000001030307) Feb 1 04:21:52 localhost python3.9[212954]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:21:53 localhost python3.9[213064]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:21:53 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 1 04:21:53 localhost systemd[1]: Stopped Load Kernel Modules. Feb 1 04:21:53 localhost systemd[1]: Stopping Load Kernel Modules... Feb 1 04:21:53 localhost systemd[1]: Starting Load Kernel Modules... Feb 1 04:21:53 localhost systemd-modules-load[213068]: Module 'msr' is built in Feb 1 04:21:53 localhost systemd[1]: Finished Load Kernel Modules. 
Feb 1 04:21:54 localhost python3.9[213178]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:21:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2693 DF PROTO=TCP SPT=53186 DPT=9101 SEQ=945611142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B850D0000000001030307) Feb 1 04:21:57 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Feb 1 04:21:58 localhost systemd[1]: Reloading. Feb 1 04:21:58 localhost systemd-rc-local-generator[213215]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:21:58 localhost systemd-sysv-generator[213218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: Reloading. Feb 1 04:21:58 localhost systemd-sysv-generator[213253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:21:58 localhost systemd-rc-local-generator[213248]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:58 localhost systemd[1]: virtproxyd.service: Deactivated successfully. Feb 1 04:21:58 localhost systemd-logind[761]: Watching system buttons on /dev/input/event0 (Power Button) Feb 1 04:21:58 localhost systemd-logind[761]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 1 04:21:58 localhost lvm[213304]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 1 04:21:58 localhost lvm[213304]: VG ceph_vg1 finished Feb 1 04:21:58 localhost lvm[213303]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 1 04:21:58 localhost lvm[213303]: VG ceph_vg0 finished Feb 1 04:21:59 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 1 04:21:59 localhost systemd[1]: Starting man-db-cache-update.service... Feb 1 04:21:59 localhost systemd[1]: Reloading. Feb 1 04:21:59 localhost systemd-rc-local-generator[213349]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:21:59 localhost systemd-sysv-generator[213354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:21:59 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 1 04:22:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55400 DF PROTO=TCP SPT=54230 DPT=9882 SEQ=465484690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B962E0000000001030307) Feb 1 04:22:00 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 1 04:22:00 localhost systemd[1]: Finished man-db-cache-update.service. Feb 1 04:22:00 localhost systemd[1]: man-db-cache-update.service: Consumed 1.263s CPU time. Feb 1 04:22:00 localhost systemd[1]: run-ra1a3255d90394862aee02f1c58a61ad4.service: Deactivated successfully. Feb 1 04:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55401 DF PROTO=TCP SPT=54230 DPT=9882 SEQ=465484690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65B9A4E0000000001030307) Feb 1 04:22:01 localhost python3.9[214610]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:22:01 localhost multipathd[212410]: exit (signal) Feb 1 04:22:01 localhost multipathd[212410]: --------shut down------- Feb 1 04:22:01 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller... Feb 1 04:22:01 localhost systemd[1]: multipathd.service: Deactivated successfully. Feb 1 04:22:01 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller. Feb 1 04:22:01 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 1 04:22:01 localhost multipathd[214616]: --------start up-------- Feb 1 04:22:01 localhost multipathd[214616]: read /etc/multipath.conf Feb 1 04:22:01 localhost multipathd[214616]: path checkers start up Feb 1 04:22:01 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. 
Feb 1 04:22:02 localhost python3.9[214731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:22:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55402 DF PROTO=TCP SPT=54230 DPT=9882 SEQ=465484690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BA24E0000000001030307) Feb 1 04:22:03 localhost python3.9[214845]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:04 localhost python3.9[214955]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:22:04 localhost systemd[1]: Reloading. Feb 1 04:22:04 localhost systemd-rc-local-generator[214980]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:22:04 localhost systemd-sysv-generator[214987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:05 localhost python3.9[215100]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:22:05 localhost network[215117]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:22:05 localhost network[215118]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:22:05 localhost network[215119]: It is advised to switch to 'NetworkManager' instead for network management. 
Feb 1 04:22:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57262 DF PROTO=TCP SPT=57006 DPT=9102 SEQ=3281454821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BAFCD0000000001030307) Feb 1 04:22:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:22:09 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Feb 1 04:22:09 localhost systemd[1]: virtqemud.service: Deactivated successfully. Feb 1 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36907 DF PROTO=TCP SPT=36858 DPT=9100 SEQ=1360949448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BBC4D0000000001030307) Feb 1 04:22:10 localhost python3.9[215354]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:12 localhost python3.9[215465]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31662 DF PROTO=TCP SPT=33722 DPT=9101 SEQ=2442379317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BC94D0000000001030307) Feb 1 04:22:13 localhost python3.9[215612]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:13 localhost python3.9[215754]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
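The kernel "DROPPING:" entries interleaved throughout are netfilter log messages for SYN packets from 192.168.122.10 that the host firewall drops (destination ports such as 9100-9102 and 9882 here). When auditing a log like this, the key=value fields are easy to pull out with a regular expression; a small parser sketch, with the sample line shortened from the entries above:

import re

# Word boundary keeps SRC/DST/PROTO from also matching MACSRC/MACDST/MACPROTO.
FIELDS = re.compile(r"\b(SRC|DST|PROTO|SPT|DPT|TTL|ID)=(\S+)")

def parse_drop(line: str) -> dict:
    # e.g. {'SRC': '192.168.122.10', 'DST': '192.168.122.108', 'TTL': '62', ...}
    return dict(FIELDS.findall(line))

sample = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.108 "
          "LEN=60 TTL=62 ID=55400 DF PROTO=TCP SPT=54230 DPT=9882")
print(parse_drop(sample))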
Feb 1 04:22:14 localhost podman[215784]: 2026-02-01 09:22:14.872535404 +0000 UTC m=+0.084207229 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:22:14 localhost podman[215784]: 2026-02-01 09:22:14.938686946 +0000 UTC m=+0.150358751 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller) Feb 1 04:22:14 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
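The transient "/usr/bin/podman healthcheck run ..." services above execute each container's configured healthcheck (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/<name>) and record health_status=healthy before the exec dies. The same check can be triggered by hand; a sketch, using the container names from the log:

import subprocess

def healthy(container: str) -> bool:
    # "podman healthcheck run" exits 0 when the container's healthcheck passes.
    return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0

for name in ("ovn_controller", "ovn_metadata_agent"):
    print(name, "healthy" if healthy(name) else "unhealthy")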
Feb 1 04:22:15 localhost python3.9[215889]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55404 DF PROTO=TCP SPT=54230 DPT=9882 SEQ=465484690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BD30D0000000001030307) Feb 1 04:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:22:17 localhost podman[216017]: 2026-02-01 09:22:17.870151808 +0000 UTC m=+0.082438826 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:22:17 localhost podman[216017]: 2026-02-01 09:22:17.879645575 +0000 UTC m=+0.091932593 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible) Feb 1 04:22:17 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:22:18 localhost python3.9[216028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57264 DF PROTO=TCP SPT=57006 DPT=9102 SEQ=3281454821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BDF0D0000000001030307) Feb 1 04:22:18 localhost python3.9[216148]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:19 localhost python3.9[216259]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:22:20 localhost python3.9[216370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:21 localhost python3.9[216480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:21 localhost python3.9[216590]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36909 DF PROTO=TCP SPT=36858 DPT=9100 SEQ=1360949448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BED0D0000000001030307) Feb 1 04:22:22 localhost python3.9[216700]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:23 localhost python3.9[216810]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:23 localhost python3.9[216920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:24 localhost python3.9[217030]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:25 localhost python3.9[217140]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31664 DF PROTO=TCP SPT=33722 DPT=9101 SEQ=2442379317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65BF90D0000000001030307) Feb 1 04:22:25 localhost python3.9[217250]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:26 localhost python3.9[217360]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:26 localhost python3.9[217470]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:27 localhost python3.9[217580]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:28 localhost python3.9[217690]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:28 localhost python3.9[217800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64378 DF PROTO=TCP SPT=57108 DPT=9882 SEQ=1145317472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C0B5D0000000001030307) Feb 1 04:22:30 localhost python3.9[217910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:30 localhost python3.9[218020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64379 DF PROTO=TCP SPT=57108 DPT=9882 SEQ=1145317472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C0F4D0000000001030307) Feb 1 04:22:32 localhost python3.9[218130]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64380 DF PROTO=TCP SPT=57108 DPT=9882 SEQ=1145317472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C174D0000000001030307) Feb 1 04:22:33 localhost python3.9[218240]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:22:34 localhost python3.9[218350]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:22:34 localhost systemd[1]: Reloading. Feb 1 04:22:34 localhost systemd-sysv-generator[218377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:22:34 localhost systemd-rc-local-generator[218372]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
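In the ansible.legacy.command task above, "#012" is the syslog escape for a newline (octal 012), so the _raw_params shell fragment decodes to:

if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi

That is, certmonger is disabled and stopped only if it is currently active, and then masked unless a unit file already exists under /etc/systemd/system.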
Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:22:35 localhost python3.9[218496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:36 localhost python3.9[218607]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39107 DF PROTO=TCP SPT=46486 DPT=9102 SEQ=2244556675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C250D0000000001030307) Feb 1 04:22:36 localhost python3.9[218718]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:37 localhost python3.9[218829]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:38 localhost python3.9[218940]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:38 localhost python3.9[219051]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:39 localhost python3.9[219162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4924 DF PROTO=TCP SPT=52242 DPT=9100 SEQ=366248767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AA65C318D0000000001030307) Feb 1 04:22:39 localhost python3.9[219273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:22:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:22:41.738 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:22:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:22:41.740 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:22:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:22:41.740 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:22:42 localhost python3.9[219384]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43009 DF PROTO=TCP SPT=34352 DPT=9101 SEQ=4279428959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C3E8D0000000001030307) Feb 1 04:22:43 localhost python3.9[219494]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:44 localhost python3.9[219604]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
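After the tripleo_nova_* unit files are removed and systemd is reloaded, each unit gets a systemctl reset-failed so no stale failed state lingers in systemctl --failed. A compact sketch of that loop, with the unit names copied from the commands in the log:

import subprocess

UNITS = [
    "tripleo_nova_compute.service",
    "tripleo_nova_migration_target.service",
    "tripleo_nova_api_cron.service",
    "tripleo_nova_api.service",
    "tripleo_nova_conductor.service",
    "tripleo_nova_metadata.service",
    "tripleo_nova_scheduler.service",
    "tripleo_nova_vnc_proxy.service",
]

for unit in UNITS:
    # Clear any lingering "failed" state left over from the stop/removal above;
    # the exit code is not checked here.
    subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)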
Feb 1 04:22:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64382 DF PROTO=TCP SPT=57108 DPT=9882 SEQ=1145317472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C470D0000000001030307) Feb 1 04:22:45 localhost podman[219715]: 2026-02-01 09:22:45.383384265 +0000 UTC m=+0.087424237 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3) Feb 1 04:22:45 localhost podman[219715]: 2026-02-01 09:22:45.449502367 +0000 UTC m=+0.153542359 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller) Feb 1 04:22:45 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: 
Deactivated successfully. Feb 1 04:22:45 localhost python3.9[219714]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:46 localhost python3.9[219850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:46 localhost python3.9[219960]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:47 localhost python3.9[220070]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:47 localhost python3.9[220180]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:22:48 localhost systemd[1]: tmp-crun.Ep6wEX.mount: Deactivated successfully. 
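The ansible.builtin.file tasks above create the host-side configuration and state directories (/var/lib/nova, /etc/ceph, /etc/multipath, /etc/nvme, /run/openvswitch, ...) with specific ownership and the container_file_t SELinux type so they can be bind-mounted into podman containers. A rough non-Ansible equivalent for one of them, with the owner/group/mode values copied from the /etc/ceph task; chcon is used here purely for illustration where the module sets setype:

import os
import shutil
import subprocess

def make_dir(path, owner, group, mode, setype="container_file_t"):
    os.makedirs(path, exist_ok=True)
    shutil.chown(path, user=owner, group=group)
    os.chmod(path, mode)
    # Apply the SELinux type so the directory is usable as a container volume.
    subprocess.run(["chcon", "-t", setype, path], check=True)

make_dir("/etc/ceph", owner="root", group="root", mode=0o750)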
Feb 1 04:22:48 localhost podman[220291]: 2026-02-01 09:22:48.544409815 +0000 UTC m=+0.093731258 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:22:48 localhost podman[220291]: 2026-02-01 09:22:48.549069926 +0000 UTC m=+0.098391399 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:22:48 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:22:48 localhost python3.9[220290]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39109 DF PROTO=TCP SPT=46486 DPT=9102 SEQ=2244556675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C550D0000000001030307) Feb 1 04:22:49 localhost python3.9[220416]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:22:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4926 DF PROTO=TCP SPT=52242 DPT=9100 SEQ=366248767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C610D0000000001030307) Feb 1 04:22:54 localhost sshd[220434]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:22:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43011 DF PROTO=TCP SPT=34352 DPT=9101 SEQ=4279428959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C6F0E0000000001030307) Feb 1 04:22:56 localhost python3.9[220528]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Feb 1 04:22:57 localhost python3.9[220639]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 1 04:22:58 localhost python3.9[220755]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604215.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None 
ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Feb 1 04:22:59 localhost sshd[220781]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:22:59 localhost systemd-logind[761]: New session 54 of user zuul. Feb 1 04:22:59 localhost systemd[1]: Started Session 54 of User zuul. Feb 1 04:23:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43881 DF PROTO=TCP SPT=45998 DPT=9882 SEQ=1733190088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C808C0000000001030307) Feb 1 04:23:00 localhost systemd[1]: session-54.scope: Deactivated successfully. Feb 1 04:23:00 localhost systemd-logind[761]: Session 54 logged out. Waiting for processes to exit. Feb 1 04:23:00 localhost systemd-logind[761]: Removed session 54. Feb 1 04:23:00 localhost python3.9[220892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43882 DF PROTO=TCP SPT=45998 DPT=9882 SEQ=1733190088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C848D0000000001030307) Feb 1 04:23:01 localhost python3.9[220978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937780.2323384-2612-110943075201031/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:02 localhost python3.9[221086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:02 localhost python3.9[221141]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43883 DF PROTO=TCP SPT=45998 DPT=9882 SEQ=1733190088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C8C8E0000000001030307) Feb 1 04:23:03 localhost python3.9[221249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Feb 1 04:23:03 localhost python3.9[221335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937782.6842198-2612-254015808744796/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:04 localhost python3.9[221443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:04 localhost python3.9[221529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937783.8872705-2612-113418659411933/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f97201355591685d5a25f9693d35e9cd6d9ded96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:05 localhost python3.9[221637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:06 localhost python3.9[221723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937785.0461545-2612-200071512002209/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14505 DF PROTO=TCP SPT=35086 DPT=9102 SEQ=155959959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65C9A0E0000000001030307) Feb 1 04:23:07 localhost python3.9[221831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:08 localhost python3.9[221917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937786.166917-2612-236257260370081/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:09 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63998 DF PROTO=TCP SPT=56596 DPT=9100 SEQ=596335886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CA68E0000000001030307) Feb 1 04:23:09 localhost python3.9[222027]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:23:10 localhost python3.9[222137]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:23:11 localhost python3.9[222247]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27010 DF PROTO=TCP SPT=47044 DPT=9101 SEQ=4075716098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CB3CE0000000001030307) Feb 1 04:23:13 localhost python3.9[222359]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:23:14 localhost python3.9[222467]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:15 localhost python3.9[222577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43885 DF PROTO=TCP SPT=45998 DPT=9882 SEQ=1733190088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CBD0E0000000001030307) Feb 1 04:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
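The ansible.legacy.stat / ansible.legacy.copy pairs above (config.json, ssh-config, 02-nova-host-specific.conf, nova_statedir_ownership.py, run-on-host) compare content by checksum: the checksum=... values in the log are SHA-1 digests of the deployed files (checksum_algorithm=sha1 in the stat calls), which is how the copy tasks decide whether anything changed. The same digest can be computed locally; the path below is one of the destinations shown above:

import hashlib

def sha1_of(path: str, chunk: int = 65536) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

print(sha1_of("/var/lib/openstack/config/nova/config.json"))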
Feb 1 04:23:15 localhost podman[222664]: 2026-02-01 09:23:15.871925021 +0000 UTC m=+0.087577091 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:23:15 localhost python3.9[222663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937794.988809-2987-235251796245736/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:15 localhost podman[222664]: 2026-02-01 09:23:15.957682495 +0000 UTC m=+0.173334605 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:23:15 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:23:16 localhost python3.9[222846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:23:17 localhost python3.9[222949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769937796.157896-3033-72625758565637/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:23:18 localhost python3.9[223077]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Feb 1 04:23:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:23:18 localhost systemd[1]: tmp-crun.qtk6FC.mount: Deactivated successfully. Feb 1 04:23:18 localhost podman[223111]: 2026-02-01 09:23:18.87447632 +0000 UTC m=+0.083933560 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:23:18 localhost podman[223111]: 2026-02-01 09:23:18.908785785 +0000 UTC m=+0.118243015 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:23:18 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
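[Editor's note] The two health-check cycles above (ovn_controller, then ovn_metadata_agent) follow the same pattern: a systemd transient unit runs "podman healthcheck run <container-id>", podman records a health_status/exec_died event pair, and the unit deactivates. Below is a minimal sketch of driving that same probe by hand, assuming podman is on PATH and that the container names taken from the log (ovn_controller, ovn_metadata_agent) exist locally; an exit code of 0 means the configured test, /openstack/healthcheck here, reported healthy.

# healthcheck_probe.py - illustrative only; mirrors the "podman healthcheck run"
# cycle recorded above. Assumes podman is installed and the named containers exist.
import subprocess

def probe(container: str) -> bool:
    # "podman healthcheck run" exits 0 when the container's configured
    # healthcheck test (here: /openstack/healthcheck) succeeds.
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name in ("ovn_controller", "ovn_metadata_agent"):
        print(f"{name}: {'healthy' if probe(name) else 'unhealthy'}")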
Feb 1 04:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14507 DF PROTO=TCP SPT=35086 DPT=9102 SEQ=155959959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CCB0E0000000001030307) Feb 1 04:23:19 localhost python3.9[223205]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:23:20 localhost python3[223315]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:23:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64000 DF PROTO=TCP SPT=56596 DPT=9100 SEQ=596335886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CD70E0000000001030307) Feb 1 04:23:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27012 DF PROTO=TCP SPT=47044 DPT=9101 SEQ=4075716098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CE30E0000000001030307) Feb 1 04:23:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18676 DF PROTO=TCP SPT=42360 DPT=9882 SEQ=830503638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CF5BD0000000001030307) Feb 1 04:23:30 localhost podman[223330]: 2026-02-01 09:23:20.797914777 +0000 UTC m=+0.044097316 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 1 04:23:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18677 DF PROTO=TCP SPT=42360 DPT=9882 SEQ=830503638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65CF9CD0000000001030307) Feb 1 04:23:31 localhost podman[223398]: Feb 1 04:23:31 localhost podman[223398]: 2026-02-01 09:23:31.20312572 +0000 UTC m=+0.081439333 container create 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Feb 1 04:23:31 localhost podman[223398]: 2026-02-01 09:23:31.168472054 +0000 UTC m=+0.046785717 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 1 04:23:31 localhost python3[223315]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Feb 1 04:23:32 localhost python3.9[223545]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18678 DF PROTO=TCP SPT=42360 DPT=9882 SEQ=830503638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D01CD0000000001030307) Feb 1 04:23:33 localhost python3.9[223657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Feb 1 04:23:34 localhost python3.9[223767]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:23:35 localhost python3[223877]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:23:36 localhost python3[223877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c 
#(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 1 04:23:36 localhost podman[223928]: 2026-02-01 09:23:36.11811738 +0000 UTC m=+0.092786600 container remove 1543f157ce4423d7729f0ac3a9fa2d7a0f71c1b7ad555c21cc9ce4ed5abc095e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '848fbaed99314033c0982eb0cffd8af7-1296029e90a465a2201c8dc6f8be17e7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 1 04:23:36 localhost python3[223877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Feb 1 04:23:36 localhost podman[223941]: Feb 1 04:23:36 localhost podman[223941]: 2026-02-01 09:23:36.218554581 +0000 UTC m=+0.083474995 container create 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes 
Operator team, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:23:36 localhost podman[223941]: 2026-02-01 09:23:36.179754739 +0000 UTC m=+0.044675203 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 1 04:23:36 localhost python3[223877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Feb 1 04:23:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48804 DF PROTO=TCP SPT=55492 DPT=9102 SEQ=3823228251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D0F4D0000000001030307) Feb 1 04:23:37 localhost python3.9[224088]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:37 localhost python3.9[224200]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 
04:23:38 localhost python3.9[224309]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769937817.9893043-3319-234522244925028/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:23:39 localhost python3.9[224364]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:23:39 localhost systemd[1]: Reloading. Feb 1 04:23:39 localhost systemd-rc-local-generator[224389]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:23:39 localhost systemd-sysv-generator[224392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63622 DF PROTO=TCP SPT=52826 DPT=9100 SEQ=1783760803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D1BCE0000000001030307) Feb 1 04:23:40 localhost python3.9[224455]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:23:40 localhost systemd[1]: Reloading. Feb 1 04:23:40 localhost systemd-rc-local-generator[224480]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:23:40 localhost systemd-sysv-generator[224486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
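[Editor's note] Having created the nova_compute container with "podman create" earlier, edpm_ansible now wires it into systemd: it installs /etc/systemd/system/edpm_nova_compute.service, reloads the daemon, then enables and restarts the unit (the ansible-systemd calls with daemon_reload=True and state=restarted enabled=True above). The sketch below reproduces only that systemctl sequence; the unit file content itself is templated by edpm_ansible and is not shown in this log, so it is not reproduced here.

# enable_edpm_unit.py - illustrative sketch of the ansible-systemd steps logged above.
import subprocess

UNIT = "edpm_nova_compute.service"

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Pick up the freshly copied /etc/systemd/system/edpm_nova_compute.service.
run("systemctl", "daemon-reload")
# Equivalent of enabled=True plus state=restarted in the ansible-systemd call.
run("systemctl", "enable", UNIT)
run("systemctl", "restart", UNIT)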
Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:23:40 localhost systemd[1]: Starting nova_compute container... Feb 1 04:23:40 localhost systemd[1]: Started libcrun container. Feb 1 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:40 localhost podman[224496]: 2026-02-01 09:23:40.639497973 +0000 UTC m=+0.126666953 container init 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:23:40 localhost podman[224496]: 2026-02-01 09:23:40.649180898 +0000 UTC m=+0.136349878 container start 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:23:40 localhost podman[224496]: nova_compute Feb 1 04:23:40 localhost nova_compute[224510]: + sudo -E kolla_set_configs Feb 1 04:23:40 localhost systemd[1]: Started nova_compute container. 
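[Editor's note] The "+ sudo -E kolla_set_configs" line above starts the sequence that follows: the entrypoint loads /var/lib/kolla/config_files/config.json, applies the COPY_ALWAYS strategy, copies each listed file into place with the requested permissions, and finally writes the service command (nova-compute) out so the wrapper can `cat /run_command` and exec it. The sketch below is a simplified rendition of that copy loop; the config.json layout assumed here ("command" plus a "config_files" list of source/dest/owner/perm entries) is the conventional kolla format, and this is an illustration of the pattern visible in the log, not kolla's actual implementation.

# kolla_copy_sketch.py - simplified COPY_ALWAYS pass matching the log lines below.
import json
import shutil
from pathlib import Path

CONFIG = Path("/var/lib/kolla/config_files/config.json")

def main() -> None:
    cfg = json.loads(CONFIG.read_text())
    for entry in cfg.get("config_files", []):
        src = Path(entry["source"])
        dest = Path(entry["dest"])
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)                          # "Copying ... to ..." lines
        dest.chmod(int(entry.get("perm", "0600"), 8))    # "Setting permission" lines
    # "Writing out command to execute" -> later read back with `cat /run_command`
    Path("/run_command").write_text(cfg["command"])

if __name__ == "__main__":
    main()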
Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Validating config file Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying service configuration files Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Deleting /etc/ceph Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Creating directory /etc/ceph Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/ceph Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Feb 1 04:23:40 localhost 
nova_compute[224510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Writing out command to execute Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:40 localhost nova_compute[224510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:23:40 localhost nova_compute[224510]: ++ cat /run_command Feb 1 04:23:40 localhost nova_compute[224510]: + CMD=nova-compute Feb 1 04:23:40 localhost nova_compute[224510]: + ARGS= Feb 1 04:23:40 localhost nova_compute[224510]: + sudo kolla_copy_cacerts Feb 1 04:23:40 localhost nova_compute[224510]: + [[ ! -n '' ]] Feb 1 04:23:40 localhost nova_compute[224510]: + . kolla_extend_start Feb 1 04:23:40 localhost nova_compute[224510]: Running command: 'nova-compute' Feb 1 04:23:40 localhost nova_compute[224510]: + echo 'Running command: '\''nova-compute'\''' Feb 1 04:23:40 localhost nova_compute[224510]: + umask 0022 Feb 1 04:23:40 localhost nova_compute[224510]: + exec nova-compute Feb 1 04:23:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:23:41.739 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:23:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:23:41.739 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:23:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:23:41.740 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.445 224514 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.445 224514 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.445 224514 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.445 224514 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 1 
04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.560 224514 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.582 224514 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:23:42 localhost nova_compute[224510]: 2026-02-01 09:23:42.582 224514 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 1 04:23:42 localhost python3.9[224634]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.013 224514 INFO nova.virt.driver [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.128 224514 INFO nova.compute.provider_config [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.135 224514 WARNING nova.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.136 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.136 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.136 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.136 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.136 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.137 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.138 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] console_host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.139 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 
2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.140 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost 
nova_compute[224510]: 2026-02-01 09:23:43.141 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.142 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.143 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.144 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost 
nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.145 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 
localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.146 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.147 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] osapi_compute_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.148 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.149 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reclaim_instance_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.150 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rpc_response_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.151 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.152 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] state_path = 
/var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.153 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63466 DF PROTO=TCP SPT=52822 DPT=9101 SEQ=2959385660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D290D0000000001030307) Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.154 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.155 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.156 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.157 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.158 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.159 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.160 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - 
- -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.161 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.162 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 
DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.163 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.164 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 
09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.165 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.166 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 
2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.167 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.168 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.169 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.170 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.171 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.172 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.172 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.172 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.172 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.172 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 
DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.173 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 
localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.174 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.175 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.db_retry_interval = 1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.176 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.177 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.178 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.179 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 
localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.180 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.181 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 
09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.182 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.183 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.184 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.185 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.186 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.187 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.188 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.189 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.190 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.191 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.192 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.193 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.approle_secret_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.194 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.195 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.use_ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.196 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.service_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.197 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.198 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.199 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.200 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 
localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.201 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.202 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.203 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 WARNING oslo_config.cfg [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 1 04:23:43 localhost nova_compute[224510]: live_migration_uri is deprecated for removal in favor of two other options that Feb 1 04:23:43 localhost nova_compute[224510]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 1 04:23:43 localhost nova_compute[224510]: and ``live_migration_inbound_addr`` respectively. Feb 1 04:23:43 localhost nova_compute[224510]: ). 
Its value may be silently ignored in the future.#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.204 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.205 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 
09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.206 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rbd_secret_uuid = 33fac0b9-80c7-560f-918a-c92d3021ca1e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rescue_ramdisk_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.207 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.208 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.209 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.210 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.211 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.212 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.213 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.214 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - 
- - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.215 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 
2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.216 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.217 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.218 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.219 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.220 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.221 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.222 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.223 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.224 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.225 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.226 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.227 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.228 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.229 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.230 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.231 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 
localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.232 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.233 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.234 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.235 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.236 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.237 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.238 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.239 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost 
nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.240 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.241 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost 
nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.242 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.243 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: 
%(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.244 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.245 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.246 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 
09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.247 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.248 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.249 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 
2026-02-01 09:23:43.249 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.249 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.249 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.250 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.250 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.250 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG 
oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.251 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.252 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.253 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.254 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] 
oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.255 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.256 224514 DEBUG oslo_service.service [None 
req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.257 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.258 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.259 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.260 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.261 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.262 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.263 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.263 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.263 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.263 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.263 224514 DEBUG oslo_service.service [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
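The DEBUG records above are oslo.config's standard option dump, emitted once per registered option when the service starts and closed with a row of asterisks. The following is a minimal, self-contained sketch of how such a dump is produced; the os_vif_ovs group and the three options registered here are only illustrative examples taken from the values logged above, not Nova's actual option definitions.

# Sketch: reproduce an oslo.config option dump like the one in the log above.
# The registered group/options are illustrative, not Nova's full option set.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.StrOpt('ovsdb_connection', default='tcp:127.0.0.1:6640'),
        cfg.IntOpt('ovs_vsctl_timeout', default=120),
        cfg.BoolOpt('per_port_bridge', default=False),
    ],
    group='os_vif_ovs',
)

CONF([], project='demo')          # parse an empty command line
CONF.log_opt_values(LOG, logging.DEBUG)   # one DEBUG record per option, then a row of asterisks

Running the sketch prints "group.option = value" DEBUG records through the standard logging module, which is the same shape as the nova_compute startup records above.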
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.264 224514 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.284 224514 INFO nova.virt.node [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Determined node identity d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from /var/lib/nova/compute_id
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.284 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.284 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.285 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.285 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 1 04:23:43 localhost systemd[1]: Started libvirt QEMU daemon.
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.355 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.359 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.360 224514 INFO nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Connection event '1' reason 'None'
Feb 1 04:23:43 localhost nova_compute[224510]: 2026-02-01 09:23:43.372 224514 DEBUG nova.virt.libvirt.volume.mount [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 1 04:23:43 localhost python3.9[224795]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.286 224514 INFO nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Libvirt host capabilities
[host capabilities XML follows in the original record, but its markup was stripped in this capture; recoverable values: host UUID b72fb799-3472-4728-b6e2-ec98d2bbb61b; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116604 with page counts 4029151/0/0; secmodels selinux (baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support for 32-bit and 64-bit via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc), pc-q35-rhel9.8.0 (alias q35), pc-q35-rhel9.6.0, pc-q35-rhel9.4.0, pc-q35-rhel9.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.6.0, pc-q35-rhel8.5.0, pc-q35-rhel8.4.0, pc-q35-rhel8.3.0, pc-q35-rhel8.2.0, pc-q35-rhel8.1.0, pc-q35-rhel8.0.0 and pc-q35-rhel7.6.0]
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.296 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.317 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[domain capabilities XML follows in the original record, but its markup was stripped in this capture; recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash; readonly yes/no; secure no); host CPU model EPYC-Rome, vendor AMD; named CPU models listed include 486, 486-v1, Broadwell (with -IBRS, -noTSX, -noTSX-IBRS, -v1 through -v4), Cascadelake-Server (with -noTSX, -v1 through -v5), ClearwaterForest (-v1), Conroe (-v1), Cooperlake (-v1, -v2), Denverton (-v1 through -v3), Dhyana (-v1, -v2), EPYC (-v1 through -v5), EPYC-Genoa (-v1, -v2), EPYC-IBPB, EPYC-Milan (-v1 through -v3), EPYC-Rome (-v1 through -v5), EPYC-Turin (-v1), GraniteRapids (-v1 through -v3), Haswell (with -IBRS, -noTSX, -noTSX-IBRS, -v1 through -v4) and Icelake-Server (with -noTSX, -v1 through -v5); the model list continues below]
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v6 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v7 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-IBRS Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Penryn Feb 1 04:23:44 localhost nova_compute[224510]: Penryn-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge Feb 1 04:23:44 localhost 
nova_compute[224510]: SandyBridge-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 
Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 
Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 
Snowridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Westmere Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-v2 Feb 1 04:23:44 localhost nova_compute[224510]: athlon Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: athlon-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: core2duo Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: core2duo-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: coreduo Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: coreduo-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: kvm32 Feb 1 04:23:44 localhost nova_compute[224510]: kvm32-v1 
Feb 1 04:23:44 localhost nova_compute[224510]: kvm64 Feb 1 04:23:44 localhost nova_compute[224510]: kvm64-v1 Feb 1 04:23:44 localhost nova_compute[224510]: n270 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: n270-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: pentium Feb 1 04:23:44 localhost nova_compute[224510]: pentium-v1 Feb 1 04:23:44 localhost nova_compute[224510]: pentium2 Feb 1 04:23:44 localhost nova_compute[224510]: pentium2-v1 Feb 1 04:23:44 localhost nova_compute[224510]: pentium3 Feb 1 04:23:44 localhost nova_compute[224510]: pentium3-v1 Feb 1 04:23:44 localhost nova_compute[224510]: phenom Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: phenom-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: qemu32 Feb 1 04:23:44 localhost nova_compute[224510]: qemu32-v1 Feb 1 04:23:44 localhost nova_compute[224510]: qemu64 Feb 1 04:23:44 localhost nova_compute[224510]: qemu64-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: file Feb 1 04:23:44 localhost nova_compute[224510]: anonymous Feb 1 04:23:44 localhost nova_compute[224510]: memfd Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: disk Feb 1 04:23:44 localhost nova_compute[224510]: cdrom Feb 1 04:23:44 localhost nova_compute[224510]: floppy Feb 1 04:23:44 localhost nova_compute[224510]: lun Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: ide Feb 1 04:23:44 localhost nova_compute[224510]: fdc Feb 1 04:23:44 localhost nova_compute[224510]: scsi Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: sata Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: virtio-transitional Feb 1 04:23:44 localhost nova_compute[224510]: virtio-non-transitional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: vnc Feb 1 04:23:44 localhost nova_compute[224510]: egl-headless Feb 1 04:23:44 localhost nova_compute[224510]: dbus Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: subsystem Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: default Feb 1 04:23:44 localhost nova_compute[224510]: mandatory Feb 1 04:23:44 localhost nova_compute[224510]: requisite Feb 1 04:23:44 localhost nova_compute[224510]: optional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: pci Feb 1 04:23:44 localhost nova_compute[224510]: scsi Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: virtio-transitional Feb 1 04:23:44 localhost nova_compute[224510]: virtio-non-transitional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: random Feb 1 04:23:44 localhost nova_compute[224510]: egd Feb 1 04:23:44 localhost nova_compute[224510]: builtin Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: path Feb 1 04:23:44 localhost nova_compute[224510]: handle Feb 1 04:23:44 localhost nova_compute[224510]: virtiofs Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: tpm-tis Feb 1 04:23:44 localhost nova_compute[224510]: tpm-crb Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: emulator Feb 1 04:23:44 localhost nova_compute[224510]: external Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 2.0 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: pty Feb 1 04:23:44 localhost nova_compute[224510]: unix Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: qemu Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: builtin Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: default Feb 1 04:23:44 localhost nova_compute[224510]: passt Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: isa Feb 1 04:23:44 localhost nova_compute[224510]: hyperv Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: null Feb 1 04:23:44 localhost nova_compute[224510]: vc Feb 1 04:23:44 localhost nova_compute[224510]: pty Feb 1 04:23:44 localhost nova_compute[224510]: dev Feb 1 04:23:44 localhost nova_compute[224510]: file Feb 1 04:23:44 localhost nova_compute[224510]: pipe Feb 1 04:23:44 localhost nova_compute[224510]: stdio Feb 1 04:23:44 localhost nova_compute[224510]: udp Feb 1 04:23:44 localhost nova_compute[224510]: tcp Feb 1 04:23:44 localhost nova_compute[224510]: unix Feb 1 04:23:44 localhost nova_compute[224510]: qemu-vdagent Feb 1 04:23:44 localhost nova_compute[224510]: dbus Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: relaxed Feb 1 04:23:44 localhost nova_compute[224510]: vapic Feb 1 04:23:44 localhost nova_compute[224510]: spinlocks Feb 1 04:23:44 localhost nova_compute[224510]: vpindex Feb 1 04:23:44 localhost nova_compute[224510]: runtime Feb 1 04:23:44 localhost nova_compute[224510]: synic Feb 1 04:23:44 localhost nova_compute[224510]: stimer Feb 1 04:23:44 localhost nova_compute[224510]: reset Feb 1 04:23:44 localhost nova_compute[224510]: vendor_id Feb 1 04:23:44 localhost nova_compute[224510]: frequencies Feb 1 04:23:44 localhost nova_compute[224510]: reenlightenment Feb 1 04:23:44 localhost nova_compute[224510]: tlbflush Feb 1 04:23:44 localhost nova_compute[224510]: ipi Feb 1 04:23:44 localhost nova_compute[224510]: avic Feb 1 04:23:44 localhost nova_compute[224510]: emsr_bitmap Feb 1 04:23:44 localhost nova_compute[224510]: xmm_input Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 4095 Feb 1 04:23:44 localhost nova_compute[224510]: on Feb 1 04:23:44 localhost nova_compute[224510]: off Feb 1 04:23:44 localhost nova_compute[224510]: off Feb 1 04:23:44 localhost nova_compute[224510]: Linux KVM Hv Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
Feb 1 04:23:44 localhost nova_compute[224510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.327 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 1 04:23:44 localhost nova_compute[224510]: [domain capabilities XML; element markup not captured, recoverable text values only] /usr/libexec/qemu-kvm, kvm, pc-q35-rhel9.8.0, i686, /usr/share/OVMF/OVMF_CODE.secboot.fd, rom, pflash, yes, no, no, on, off, on, off, EPYC-Rome, AMD
Feb 1 04:23:44 localhost nova_compute[224510]: [supported CPU models] 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1 Feb 1 04:23:44 localhost
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Conroe Feb 1 04:23:44 localhost nova_compute[224510]: Conroe-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Cooperlake Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Cooperlake-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Cooperlake-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Denverton Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Denverton-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Denverton-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Denverton-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Dhyana Feb 1 04:23:44 localhost nova_compute[224510]: Dhyana-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Dhyana-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Genoa Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Genoa-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Genoa-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-IBPB Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Milan Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Milan-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Milan-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Milan-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome-v4 Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Rome-v5 Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Turin Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-Turin-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-v1 Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-v2 Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: EPYC-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-noTSX Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-noTSX Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v4 Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v6 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v7 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v2 
Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Penryn Feb 1 04:23:44 localhost nova_compute[224510]: Penryn-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 
Feb 1 04:23:44 localhost nova_compute[224510]: [libvirt domainCapabilities XML dump continues from earlier records; the XML markup was lost in log capture and only element text survives. Remaining CPU model entries: SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SapphireRapids-v4, SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1]
Feb 1 04:23:44 localhost nova_compute[224510]: [device and feature enum values, in log order: file, anonymous, memfd; disk, cdrom, floppy, lun; fdc, scsi, virtio, usb, sata; virtio, virtio-transitional, virtio-non-transitional; vnc, egl-headless, dbus; subsystem; default, mandatory, requisite, optional; usb, pci, scsi; virtio, virtio-transitional, virtio-non-transitional; random, egd, builtin; path, handle, virtiofs; tpm-tis, tpm-crb; emulator, external; 2.0; usb; pty, unix; qemu; builtin; default, passt; isa, hyperv; null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus]
Feb 1 04:23:44 localhost nova_compute[224510]: [hyperv enum values: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; followed by the values 4095, on, off, off, 'Linux KVM Hv']
Feb 1 04:23:44 localhost nova_compute[224510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.399 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.405 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 1 04:23:44 localhost nova_compute[224510]: [libvirt domainCapabilities XML dump for machine_type=pc; the XML markup was lost in log capture and only element text survives: /usr/libexec/qemu-kvm, kvm, pc-i440fx-rhel7.6.0, x86_64; loader /usr/share/OVMF/OVMF_CODE.secboot.fd with values rom, pflash, yes, no, no; on, off; on, off; apparent host-model EPYC-Rome, vendor AMD]
Feb 1 04:23:44 localhost nova_compute[224510]: [CPU model entries: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids; list continues in the following records]
Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: GraniteRapids-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-noTSX Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Haswell-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 
1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-noTSX Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v6 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Icelake-Server-v7 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 
IvyBridge-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: IvyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: KnightsMill-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Nehalem-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G1-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G2-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G3-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G4-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Opteron_G5-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Penryn Feb 1 04:23:44 localhost nova_compute[224510]: Penryn-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge 
Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: SandyBridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: [libvirt domain capabilities XML, continued; the XML element markup is missing from the capture and only the text values remain, in order:]
[CPU models: Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1]
[memory backing source types: file, anonymous, memfd]
[disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional]
[graphics types: vnc, egl-headless, dbus]
[hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi]
[rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin]
[filesystem driver types: path, handle, virtiofs]
[TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; backend version: 2.0]
[redirdev bus: usb; channel types: pty, unix; further values: qemu, builtin]
[interface backends: default, passt; panic models: isa, hyperv]
[console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus]
[hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; trailing values: 4095, on, off, off, Linux KVM Hv]
Feb 1 04:23:44 localhost nova_compute[224510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.463 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
[domain capabilities XML for this machine type follows; markup missing, text values only: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-q35-rhel9.8.0; arch x86_64; firmware: efi; loaders: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; yes/no; yes/no; on/off; on/off; host CPU model: EPYC-Rome (vendor AMD)]
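[Aside, not part of the captured log: the document that nova's _get_domain_capabilities() dumps above can be reproduced directly against libvirt. A minimal sketch, assuming the libvirt-python bindings are installed on the compute host and qemu:///system is reachable; the emulator path, arch, machine type and virt type are taken from the debug line above.]

    # sketch: fetch the same <domainCapabilities> XML that nova logs at DEBUG level
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        # positional arguments: emulatorbin, arch, machine, virttype, flags
        caps_xml = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0)
        print(caps_xml)
    finally:
        conn.close()

[The same XML can also be inspected from a shell on the host with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm]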
[the dump continues with the CPU models reported for this machine type: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, ... (list continues in the following entries)]
Feb 1 04:23:44 localhost
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SapphireRapids-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 
04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: SierraForest-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 
1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Client-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-noTSX-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Skylake-Server-v5 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v2 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v3 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Snowridge-v4 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Westmere Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-IBRS Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Westmere-v2 Feb 1 04:23:44 localhost nova_compute[224510]: athlon Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: athlon-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: core2duo Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: core2duo-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: coreduo Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: coreduo-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: kvm32 Feb 1 04:23:44 localhost nova_compute[224510]: kvm32-v1 Feb 1 04:23:44 localhost nova_compute[224510]: kvm64 Feb 1 04:23:44 localhost nova_compute[224510]: kvm64-v1 Feb 1 04:23:44 localhost nova_compute[224510]: n270 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: n270-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: pentium Feb 1 04:23:44 localhost nova_compute[224510]: pentium-v1 Feb 1 04:23:44 localhost nova_compute[224510]: pentium2 Feb 1 04:23:44 localhost nova_compute[224510]: pentium2-v1 Feb 1 04:23:44 localhost nova_compute[224510]: pentium3 Feb 1 04:23:44 localhost nova_compute[224510]: pentium3-v1 Feb 1 04:23:44 localhost nova_compute[224510]: phenom Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: phenom-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: qemu32 Feb 1 04:23:44 localhost 
nova_compute[224510]: qemu32-v1 Feb 1 04:23:44 localhost nova_compute[224510]: qemu64 Feb 1 04:23:44 localhost nova_compute[224510]: qemu64-v1 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: file Feb 1 04:23:44 localhost nova_compute[224510]: anonymous Feb 1 04:23:44 localhost nova_compute[224510]: memfd Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: disk Feb 1 04:23:44 localhost nova_compute[224510]: cdrom Feb 1 04:23:44 localhost nova_compute[224510]: floppy Feb 1 04:23:44 localhost nova_compute[224510]: lun Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: fdc Feb 1 04:23:44 localhost nova_compute[224510]: scsi Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: sata Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: virtio-transitional Feb 1 04:23:44 localhost nova_compute[224510]: virtio-non-transitional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: vnc Feb 1 04:23:44 localhost nova_compute[224510]: egl-headless Feb 1 04:23:44 localhost nova_compute[224510]: dbus Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: subsystem Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: default Feb 1 04:23:44 localhost nova_compute[224510]: mandatory Feb 1 04:23:44 localhost nova_compute[224510]: requisite Feb 1 04:23:44 localhost nova_compute[224510]: optional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: pci Feb 1 04:23:44 localhost nova_compute[224510]: scsi Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: virtio Feb 1 04:23:44 localhost nova_compute[224510]: virtio-transitional Feb 1 04:23:44 localhost nova_compute[224510]: virtio-non-transitional Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: random Feb 1 04:23:44 localhost nova_compute[224510]: egd Feb 1 04:23:44 localhost nova_compute[224510]: builtin Feb 1 04:23:44 localhost 
nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: path Feb 1 04:23:44 localhost nova_compute[224510]: handle Feb 1 04:23:44 localhost nova_compute[224510]: virtiofs Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: tpm-tis Feb 1 04:23:44 localhost nova_compute[224510]: tpm-crb Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: emulator Feb 1 04:23:44 localhost nova_compute[224510]: external Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 2.0 Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: usb Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: pty Feb 1 04:23:44 localhost nova_compute[224510]: unix Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: qemu Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: builtin Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: default Feb 1 04:23:44 localhost nova_compute[224510]: passt Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: isa Feb 1 04:23:44 localhost nova_compute[224510]: hyperv Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: null Feb 1 04:23:44 localhost nova_compute[224510]: vc Feb 1 04:23:44 localhost nova_compute[224510]: pty Feb 1 04:23:44 localhost nova_compute[224510]: dev Feb 1 04:23:44 localhost nova_compute[224510]: file Feb 1 04:23:44 localhost nova_compute[224510]: pipe Feb 1 04:23:44 localhost nova_compute[224510]: stdio Feb 1 04:23:44 localhost nova_compute[224510]: udp Feb 1 04:23:44 localhost nova_compute[224510]: tcp Feb 1 04:23:44 localhost nova_compute[224510]: unix Feb 1 04:23:44 localhost nova_compute[224510]: qemu-vdagent Feb 1 04:23:44 localhost nova_compute[224510]: dbus Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 
localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: relaxed Feb 1 04:23:44 localhost nova_compute[224510]: vapic Feb 1 04:23:44 localhost nova_compute[224510]: spinlocks Feb 1 04:23:44 localhost nova_compute[224510]: vpindex Feb 1 04:23:44 localhost nova_compute[224510]: runtime Feb 1 04:23:44 localhost nova_compute[224510]: synic Feb 1 04:23:44 localhost nova_compute[224510]: stimer Feb 1 04:23:44 localhost nova_compute[224510]: reset Feb 1 04:23:44 localhost nova_compute[224510]: vendor_id Feb 1 04:23:44 localhost nova_compute[224510]: frequencies Feb 1 04:23:44 localhost nova_compute[224510]: reenlightenment Feb 1 04:23:44 localhost nova_compute[224510]: tlbflush Feb 1 04:23:44 localhost nova_compute[224510]: ipi Feb 1 04:23:44 localhost nova_compute[224510]: avic Feb 1 04:23:44 localhost nova_compute[224510]: emsr_bitmap Feb 1 04:23:44 localhost nova_compute[224510]: xmm_input Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: 4095 Feb 1 04:23:44 localhost nova_compute[224510]: on Feb 1 04:23:44 localhost nova_compute[224510]: off Feb 1 04:23:44 localhost nova_compute[224510]: off Feb 1 04:23:44 localhost nova_compute[224510]: Linux KVM Hv Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: Feb 1 04:23:44 localhost nova_compute[224510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.514 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.515 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.517 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.518 224514 INFO nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Secure Boot support detected#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.520 224514 INFO nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] The 
live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.520 224514 INFO nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.531 224514 DEBUG nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.575 224514 INFO nova.virt.node [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Determined node identity d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from /var/lib/nova/compute_id#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.595 224514 DEBUG nova.compute.manager [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Verified node d5eeed9a-e4d0-4244-8d4e-39e5c8263590 matches my host np0005604215.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 1 04:23:44 localhost nova_compute[224510]: 2026-02-01 09:23:44.668 224514 INFO nova.compute.manager [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.101 224514 INFO nova.service [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Updating service version for nova-compute on np0005604215.localdomain from 57 to 66#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.144 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.145 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.145 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.145 224514 DEBUG nova.compute.resource_tracker [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.145 224514 DEBUG oslo_concurrency.processutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Running cmd (subprocess): 
ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:23:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18680 DF PROTO=TCP SPT=42360 DPT=9882 SEQ=830503638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D310E0000000001030307) Feb 1 04:23:45 localhost python3.9[224915]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.594 224514 DEBUG oslo_concurrency.processutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:23:45 localhost systemd[1]: Started libvirt nodedev daemon. Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.950 224514 WARNING nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.952 224514 DEBUG nova.compute.resource_tracker [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13613MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": 
"0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.953 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:23:45 localhost nova_compute[224510]: 2026-02-01 09:23:45.953 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.093 224514 DEBUG nova.compute.resource_tracker [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.093 224514 DEBUG nova.compute.resource_tracker [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.146 224514 DEBUG nova.scheduler.client.report [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.170 224514 DEBUG nova.scheduler.client.report [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.170 224514 DEBUG nova.compute.provider_tree [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.184 224514 DEBUG nova.scheduler.client.report [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.205 224514 DEBUG nova.scheduler.client.report [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: 
HW_CPU_X86_BMI,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.225 224514 DEBUG oslo_concurrency.processutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:23:46 localhost podman[225071]: 2026-02-01 09:23:46.227453034 +0000 UTC m=+0.086506627 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:23:46 localhost podman[225071]: 2026-02-01 09:23:46.295973863 +0000 UTC m=+0.155027436 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:23:46 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:23:46 localhost python3.9[225070]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None 
quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 1 04:23:46 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 122.2 (407 of 333 items), suggesting rotation. Feb 1 04:23:46 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 04:23:46 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:23:46 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.642 224514 DEBUG oslo_concurrency.processutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.649 224514 DEBUG nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 1 04:23:46 localhost nova_compute[224510]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.649 224514 INFO nova.virt.libvirt.host [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.651 224514 DEBUG nova.compute.provider_tree [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.651 224514 DEBUG nova.virt.libvirt.driver [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.681 224514 DEBUG nova.scheduler.client.report [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 
'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.775 224514 DEBUG nova.compute.provider_tree [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Updating resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.819 224514 DEBUG nova.compute.resource_tracker [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.820 224514 DEBUG oslo_concurrency.lockutils [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.820 224514 DEBUG nova.service [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.912 224514 DEBUG nova.service [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Feb 1 04:23:46 localhost nova_compute[224510]: 2026-02-01 09:23:46.912 224514 DEBUG nova.servicegroup.drivers.db [None req-4842977a-3d23-43c6-a834-f3e97ae6307a - - - - - -] DB_Driver: join new ServiceGroup member np0005604215.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Feb 1 04:23:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48806 DF PROTO=TCP SPT=55492 DPT=9102 SEQ=3823228251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D3F0D0000000001030307) Feb 1 04:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:23:49 localhost systemd[1]: tmp-crun.xu90Tr.mount: Deactivated successfully. 
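The inventory dict logged for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 carries the numbers placement uses to size allocations: effective capacity is (total - reserved) * allocation_ratio, and no single allocation may exceed max_unit. A minimal sketch of that arithmetic, using the VCPU/MEMORY_MB/DISK_GB values from the entries above (the helper function is an assumption, not nova code):

    # Inventory as logged for the provider above.
    INVENTORY = {
        "VCPU": {"total": 8, "reserved": 0, "max_unit": 8, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "max_unit": 15738, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 0, "max_unit": 41, "allocation_ratio": 1.0},
    }

    def effective_capacity(inv):
        """Placement-style capacity: (total - reserved) * allocation_ratio."""
        return {rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
                for rc, v in inv.items()}

    print(effective_capacity(INVENTORY))
    # {'VCPU': 128, 'MEMORY_MB': 15226, 'DISK_GB': 41}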
Feb 1 04:23:49 localhost podman[225252]: 2026-02-01 09:23:49.691328964 +0000 UTC m=+0.069747607 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 1 04:23:49 localhost podman[225252]: 2026-02-01 09:23:49.719867073 +0000 UTC m=+0.098285726 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:23:49 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:23:49 localhost python3.9[225251]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:23:49 localhost systemd[1]: Stopping nova_compute container... Feb 1 04:23:50 localhost systemd[1]: tmp-crun.CJvFoo.mount: Deactivated successfully. Feb 1 04:23:51 localhost nova_compute[224510]: 2026-02-01 09:23:51.030 224514 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Feb 1 04:23:51 localhost nova_compute[224510]: 2026-02-01 09:23:51.032 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:23:51 localhost nova_compute[224510]: 2026-02-01 09:23:51.033 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:23:51 localhost nova_compute[224510]: 2026-02-01 09:23:51.033 224514 DEBUG oslo_concurrency.lockutils [None req-8068517e-d2d4-4816-a849-5dd48d7eee84 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:23:51 localhost systemd[1]: libpod-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b.scope: Deactivated successfully. Feb 1 04:23:51 localhost journal[224673]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, ) Feb 1 04:23:51 localhost journal[224673]: hostname: np0005604215.localdomain Feb 1 04:23:51 localhost journal[224673]: End of file while reading data: Input/output error Feb 1 04:23:51 localhost systemd[1]: libpod-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b.scope: Consumed 3.803s CPU time. 
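Both the "compute_resources" and "singleton_lock" acquire/release pairs above come out of oslo.concurrency's lockutils helpers. A rough sketch of the same pattern, assuming only the public lockutils API (the lock names are the ones from the log; the function bodies are made up):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Serialized: only one thread recalculates the node's resource view at a time.
        pass

    def stop_service():
        # Equivalent context-manager form, as seen around the shutdown bookkeeping above.
        with lockutils.lock("singleton_lock"):
            pass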
Feb 1 04:23:51 localhost podman[225273]: 2026-02-01 09:23:51.417459736 +0000 UTC m=+1.442966633 container died 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:23:51 localhost systemd[1]: tmp-crun.Zlc5wd.mount: Deactivated successfully. Feb 1 04:23:51 localhost systemd[1]: var-lib-containers-storage-overlay-54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538-merged.mount: Deactivated successfully. Feb 1 04:23:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b-userdata-shm.mount: Deactivated successfully. 
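The kernel "DROPPING:" records above (and the two that follow) are netfilter LOG-style packet dumps of connections to ports 9100-9102 being rejected on br-ex. A small sketch that pulls the dropped 5-tuple out of one of them (the line is copied from the log; the parser is an assumption):

    import re

    DROP_LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 "
                 "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 "
                 "PREC=0x00 TTL=62 ID=48806 DF PROTO=TCP SPT=55492 DPT=9102 SEQ=3823228251 "
                 "ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0")

    def dropped_flow(line):
        """Return (proto, src, sport, dst, dport) from a netfilter LOG-style line."""
        kv = dict(re.findall(r"([A-Z]+)=(\S+)", line))
        return kv["PROTO"], kv["SRC"], int(kv["SPT"]), kv["DST"], int(kv["DPT"])

    print(dropped_flow(DROP_LINE))
    # ('TCP', '192.168.122.10', 55492, '192.168.122.108', 9102)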
Feb 1 04:23:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63624 DF PROTO=TCP SPT=52826 DPT=9100 SEQ=1783760803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D4B0D0000000001030307) Feb 1 04:23:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63468 DF PROTO=TCP SPT=52822 DPT=9101 SEQ=2959385660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA65D590D0000000001030307) Feb 1 04:23:55 localhost podman[225273]: 2026-02-01 09:23:55.736053746 +0000 UTC m=+5.761560593 container cleanup 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:23:55 localhost podman[225273]: nova_compute Feb 1 04:23:55 localhost podman[225567]: error opening file `/run/crun/6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b/status`: No such file or directory Feb 1 04:23:55 localhost podman[225556]: 2026-02-01 09:23:55.827656669 +0000 UTC m=+0.063829277 container cleanup 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Feb 1 04:23:55 localhost podman[225556]: nova_compute Feb 1 04:23:55 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Feb 1 04:23:55 localhost systemd[1]: Stopped nova_compute container. Feb 1 04:23:55 localhost systemd[1]: Starting nova_compute container... Feb 1 04:23:55 localhost systemd[1]: Started libcrun container. Feb 1 04:23:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:55 localhost podman[225571]: 2026-02-01 09:23:55.978309431 +0000 UTC m=+0.121401131 container init 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:23:55 localhost podman[225571]: 2026-02-01 09:23:55.987396417 +0000 UTC m=+0.130488107 container start 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:23:55 localhost podman[225571]: nova_compute Feb 1 04:23:55 localhost nova_compute[225585]: + sudo -E kolla_set_configs Feb 1 04:23:55 localhost systemd[1]: Started nova_compute container. 
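The config_data blob recorded for nova_compute above is what edpm_ansible hands to podman when it recreates the container. As a loose illustration only (the flag mapping is assumed, not taken from the edpm role; the dict is trimmed from the entries above), the same structure corresponds roughly to command-line arguments like these:

    # Trimmed from the nova_compute config_data in the entries above.
    CONFIG = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True,
        "user": "nova",
        "restart": "always",
        "net": "host",
        "pid": "host",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
                    "/run/libvirt:/run/libvirt:shared"],
    }

    def podman_args(cfg):
        """Very rough sketch of how such a dict maps onto podman run flags."""
        args = ["podman", "run", "--name", "nova_compute", "--detach"]
        if cfg.get("privileged"):
            args.append("--privileged")
        args += ["--user", cfg["user"], "--restart", cfg["restart"],
                 "--net", cfg["net"], "--pid", cfg["pid"]]
        for key, value in cfg["environment"].items():
            args += ["--env", f"{key}={value}"]
        for vol in cfg["volumes"]:
            args += ["--volume", vol]
        return args + [cfg["image"], "kolla_start"]

    print(" ".join(podman_args(CONFIG)))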
Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Validating config file Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying service configuration files Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /etc/ceph Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Creating directory /etc/ceph Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/ceph Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 1 04:23:56 localhost nova_compute[225585]: 
INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Writing out command to execute Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:56 localhost nova_compute[225585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:23:56 localhost nova_compute[225585]: ++ cat /run_command Feb 1 04:23:56 localhost nova_compute[225585]: + CMD=nova-compute Feb 1 04:23:56 localhost nova_compute[225585]: + ARGS= Feb 1 04:23:56 localhost nova_compute[225585]: + sudo kolla_copy_cacerts Feb 1 04:23:56 localhost nova_compute[225585]: + [[ ! -n '' ]] Feb 1 04:23:56 localhost nova_compute[225585]: + . 
kolla_extend_start Feb 1 04:23:56 localhost nova_compute[225585]: Running command: 'nova-compute' Feb 1 04:23:56 localhost nova_compute[225585]: + echo 'Running command: '\''nova-compute'\''' Feb 1 04:23:56 localhost nova_compute[225585]: + umask 0022 Feb 1 04:23:56 localhost nova_compute[225585]: + exec nova-compute Feb 1 04:23:57 localhost python3.9[225707]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 1 04:23:57 localhost systemd[1]: Started libpod-conmon-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope. Feb 1 04:23:57 localhost systemd[1]: Started libcrun container. 
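The INFO:__main__ lines above are kolla_set_configs walking its config.json and copying each source into place before the wrapper execs nova-compute. A stripped-down sketch of that delete/copy/chmod loop (the paths are copied from the log; the JSON layout and helper are assumptions, not the real kolla script):

    import os
    import shutil

    # Hypothetical subset of /var/lib/kolla/config_files/config.json.
    CONFIG_FILES = [
        {"source": "/var/lib/kolla/config_files/01-nova.conf",
         "dest": "/etc/nova/nova.conf.d/01-nova.conf", "perm": 0o600},
        {"source": "/var/lib/kolla/config_files/ceph/ceph.conf",
         "dest": "/etc/ceph/ceph.conf", "perm": 0o600},
    ]

    def copy_service_configs(entries):
        for entry in entries:
            os.makedirs(os.path.dirname(entry["dest"]), exist_ok=True)
            if os.path.exists(entry["dest"]):
                os.remove(entry["dest"])                 # "Deleting ..." in the log
            shutil.copy(entry["source"], entry["dest"])  # "Copying ... to ..."
            os.chmod(entry["dest"], entry["perm"])       # "Setting permission for ..."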
Feb 1 04:23:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 04:23:57 localhost podman[225733]: 2026-02-01 09:23:57.596601387 +0000 UTC m=+0.136971856 container init 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:23:57 localhost podman[225733]: 2026-02-01 09:23:57.613671027 +0000 UTC m=+0.154041496 container start 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3) Feb 1 04:23:57 localhost python3.9[225707]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start 
nova_compute_init Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Applying nova statedir ownership Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9 Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd Feb 1 04:23:57 localhost nova_compute_init[225755]: INFO:nova_statedir:Nova statedir ownership complete Feb 1 04:23:57 localhost systemd[1]: libpod-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope: Deactivated successfully. 
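nova_compute_init above runs nova_statedir_ownership.py, which walks /var/lib/nova, skips whatever NOVA_STATEDIR_OWNERSHIP_SKIP names, and chowns anything not already owned by 42436:42436. A minimal sketch of that walk (the target uid/gid and skip path are taken from the log; the code itself is an assumption, not the shipped script):

    import os

    TARGET_UID, TARGET_GID = 42436, 42436              # "Target ownership for /var/lib/nova"
    SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id")

    def apply_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path == SKIP:
                    continue                             # leave compute_id alone
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)  # "Changing ownership of ..."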
Feb 1 04:23:57 localhost podman[225754]: 2026-02-01 09:23:57.685765724 +0000 UTC m=+0.056175843 container died 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init) Feb 1 04:23:57 localhost podman[225767]: 2026-02-01 09:23:57.762659438 +0000 UTC m=+0.074499072 container cleanup 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:23:57 localhost systemd[1]: libpod-conmon-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope: Deactivated successfully. Feb 1 04:23:57 localhost systemd[1]: tmp-crun.p8KUQI.mount: Deactivated successfully. Feb 1 04:23:57 localhost systemd[1]: var-lib-containers-storage-overlay-a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890-merged.mount: Deactivated successfully. Feb 1 04:23:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d-userdata-shm.mount: Deactivated successfully. 
Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.844 225589 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.844 225589 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.844 225589 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.844 225589 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.963 225589 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.986 225589 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:23:57 localhost nova_compute[225585]: 2026-02-01 09:23:57.986 225589 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 1 04:23:58 localhost systemd-logind[761]: Session 53 logged out. Waiting for processes to exit. Feb 1 04:23:58 localhost systemd[1]: session-53.scope: Deactivated successfully. Feb 1 04:23:58 localhost systemd[1]: session-53.scope: Consumed 1min 55.991s CPU time. Feb 1 04:23:58 localhost systemd-logind[761]: Removed session 53. Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.467 225589 INFO nova.virt.driver [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.577 225589 INFO nova.compute.provider_config [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.585 225589 WARNING nova.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.585 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.586 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.586 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.586 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.586 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.586 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.587 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.588 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] console_host = np0005604215.localdomain log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.589 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.590 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] enable_new_services = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.591 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.592 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.593 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.594 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.595 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] long_rpc_timeout = 1800 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.596 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.597 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] my_block_storage_ip = 
192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.598 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.599 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.600 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.601 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.602 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.602 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.602 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.602 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.603 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.604 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.604 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.605 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.605 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.605 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.606 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.606 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.606 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.607 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.607 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.607 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.607 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.608 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.608 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.608 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.609 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.609 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.609 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.610 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.610 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.610 
225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.611 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.611 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.611 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.611 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.612 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.612 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.612 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.613 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.613 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.613 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.614 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.614 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.614 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 
- - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.614 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.615 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.615 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.616 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.616 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.617 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.617 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.617 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.618 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.618 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.618 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.619 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 
2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.619 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.619 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.620 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.620 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.620 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.621 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.621 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.621 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.622 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.622 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.622 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.623 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 
09:23:58.623 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.623 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.624 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.624 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.624 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.625 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.625 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.625 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.626 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.626 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.626 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.627 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.627 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.debug_cache_backend = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.627 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.628 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.628 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.628 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.628 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.629 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.629 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.630 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.630 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.630 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.631 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.631 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.631 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.632 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.632 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.632 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.633 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.633 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.633 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.634 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.634 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.634 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.635 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.635 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.635 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.635 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.636 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.636 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.636 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.637 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.637 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.637 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.638 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.638 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.639 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.639 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.639 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.640 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.640 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.640 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.641 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.641 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.641 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.642 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.642 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.642 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.642 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.642 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.643 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.643 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.643 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.643 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.643 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.644 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.644 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.644 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.644 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.644 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.645 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.645 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.645 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.645 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.645 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.646 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.646 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.646 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.646 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.647 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.648 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.648 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.648 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.648 225589 DEBUG oslo_service.service 
[None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.648 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.649 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.649 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.649 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.649 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.649 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.650 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.650 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.650 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.650 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.650 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.651 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 
2026-02-01 09:23:58.651 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.651 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.651 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.652 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.652 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.652 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.652 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.652 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.653 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.653 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.653 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.653 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.653 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.654 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.654 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.654 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.654 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.654 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.655 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.656 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.656 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.656 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.656 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.656 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.657 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.658 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.658 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.658 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 
04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.658 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.658 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.659 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.660 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.660 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.660 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.660 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.660 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.661 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.662 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.662 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.662 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.662 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.662 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] guestfs.debug = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.663 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.664 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.664 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.664 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.664 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.665 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.665 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.665 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.665 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.665 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.666 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.666 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.666 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.666 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.666 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.667 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.667 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.667 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.667 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.668 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.668 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.668 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.668 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.668 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.669 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.669 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.669 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.669 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.669 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.670 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.670 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.670 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.670 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.670 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.671 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.672 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.672 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.672 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.672 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.672 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.valid_interfaces = 
['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.673 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.674 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.675 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.676 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.677 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 
04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.678 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.679 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.680 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.681 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] keystone.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.682 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.683 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.684 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.685 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.686 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 WARNING oslo_config.cfg [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 1 04:23:58 localhost nova_compute[225585]: live_migration_uri is deprecated for removal in favor of two other options that Feb 1 04:23:58 localhost nova_compute[225585]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 1 04:23:58 localhost nova_compute[225585]: and ``live_migration_inbound_addr`` respectively. Feb 1 04:23:58 localhost nova_compute[225585]: ). Its value may be silently ignored in the future.#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.687 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 
2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.688 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.689 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rbd_secret_uuid = 33fac0b9-80c7-560f-918a-c92d3021ca1e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rbd_user = 
openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.690 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.691 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.692 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 
09:23:58.692 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.693 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.694 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.695 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.connect_retries = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.696 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.697 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] 
neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.698 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.699 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.700 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.connect_retry_delay = None log_opt_values 
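A few entries above, oslo.config logs a WARNING that [libvirt]live_migration_uri (here qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey) is deprecated in favor of live_migration_scheme and live_migration_inbound_addr. A hedged sketch of expressing the same intent with the replacement options; the 'ssh' scheme and the inbound address are assumptions, and the keyfile query parameter of the old URI is not addressed by the two options named in the warning:

from oslo_config import cfg

# Sketch only: the replacement option names come from the deprecation warning
# in this log; the override values are illustrative assumptions, not values
# read from this host.
CONF = cfg.CONF
CONF.register_opts(
    [cfg.StrOpt('live_migration_scheme'), cfg.StrOpt('live_migration_inbound_addr')],
    group='libvirt',
)
CONF([], project='sketch')
CONF.set_override('live_migration_scheme', 'ssh', group='libvirt')                        # assumed
CONF.set_override('live_migration_inbound_addr', 'compute-0.internal', group='libvirt')  # assumed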
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.701 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.702 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] 
placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.703 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.704 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.705 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.706 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.707 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values 
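The [placement] block above shows how this nova-compute reaches the placement service: password auth against http://keystone-internal.openstack.svc:5000 as user nova in project service (both domains Default), restricted to the internal interface of region regionOne, with the password masked as ****. One way a client could reproduce that session with keystoneauth1, sketched under those assumptions (the password placeholder and the /resource_providers request are illustrative):

from keystoneauth1 import session
from keystoneauth1.identity import v3

# Sketch only: auth_url, username, project, domains, region, interface and
# service_type are the values logged above; the password is masked as **** in
# the log, so the value here is a placeholder.
auth = v3.Password(
    auth_url='http://keystone-internal.openstack.svc:5000',
    username='nova',
    password='REPLACE_ME',
    project_name='service',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

resp = sess.get(
    '/resource_providers',
    endpoint_filter={'service_type': 'placement',
                     'interface': 'internal',
                     'region_name': 'regionOne'},
)
print(resp.status_code)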
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.708 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.709 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.710 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.711 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.712 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.713 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.714 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.715 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.716 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.717 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.718 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.console_delay_seconds = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.719 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] 
vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.720 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.721 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.722 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.723 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - 
-] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.724 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - 
-] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.725 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.726 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.727 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.728 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None 
req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.729 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] 
oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.730 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.731 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 
- - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.732 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.733 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.734 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 
DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.735 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.736 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost 
nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.737 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.user_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.738 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.739 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 
localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.740 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.741 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG 
oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.742 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.743 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.744 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.745 225589 DEBUG oslo_service.service [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.745 225589 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.759 225589 INFO nova.virt.node [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - 
-] Determined node identity d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from /var/lib/nova/compute_id#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.760 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.760 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.760 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.761 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.773 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.775 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.776 225589 INFO nova.virt.libvirt.driver [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.783 225589 INFO nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Libvirt host capabilities
[Libvirt host capabilities XML elided (markup lost in capture). Recoverable details: host UUID b72fb799-3472-4728-b6e2-ec98d2bbb61b, arch x86_64, host CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory 16116604 KiB, security models selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107), hvm guest support for 32-bit and 64-bit x86 via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (q35).]
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.789 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.792 225589 DEBUG nova.virt.libvirt.volume.mount [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.794 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[Libvirt domain capabilities XML for arch i686, machine pc-q35-rhel9.8.0 elided (markup lost in capture). Recoverable details: emulator /usr/libexec/qemu-kvm, domain type kvm, firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash), reported host CPU model EPYC-Rome (vendor AMD), and a list of selectable CPU models including 486, Broadwell, Cascadelake-Server, ClearwaterForest, Conroe, Cooperlake, Denverton, Dhyana, EPYC, EPYC-Genoa, EPYC-IBPB, EPYC-Milan, EPYC-Rome, EPYC-Turin, and GraniteRapids variants; the model listing continues beyond this excerpt.]
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-noTSX Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-noTSX Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: 
Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v6 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v7 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: KnightsMill Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: KnightsMill-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G1-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G2 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G2-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G3 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G3-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G4-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G5-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Penryn Feb 1 04:23:58 localhost nova_compute[225585]: Penryn-v1 Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 
1 04:23:58 localhost nova_compute[225585]: SierraForest-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Westmere Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-v2 Feb 1 04:23:58 localhost nova_compute[225585]: athlon Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: athlon-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: core2duo Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: core2duo-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: coreduo Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: coreduo-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: kvm32 Feb 1 04:23:58 localhost nova_compute[225585]: kvm32-v1 Feb 1 04:23:58 localhost nova_compute[225585]: kvm64 Feb 1 04:23:58 localhost nova_compute[225585]: kvm64-v1 Feb 1 04:23:58 localhost nova_compute[225585]: n270 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: n270-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: pentium Feb 1 04:23:58 localhost nova_compute[225585]: pentium-v1 Feb 1 04:23:58 localhost nova_compute[225585]: pentium2 Feb 1 04:23:58 localhost 
Feb 1 04:23:58 localhost nova_compute[225585]: [libvirt domain-capabilities XML dump; element tags were stripped in log capture, only element values survive — recoverable values grouped below]
Feb 1 04:23:58 localhost nova_compute[225585]:   CPU models (tail of list): pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 1 04:23:58 localhost nova_compute[225585]:   memory backing source types: file, anonymous, memfd
Feb 1 04:23:58 localhost nova_compute[225585]:   disk device types: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Feb 1 04:23:58 localhost nova_compute[225585]:   graphics types: vnc, egl-headless, dbus
Feb 1 04:23:58 localhost nova_compute[225585]:   hostdev: mode subsystem; startup policy default, mandatory, requisite, optional; subsystem types usb, pci, scsi
Feb 1 04:23:58 localhost nova_compute[225585]:   rng: models virtio, virtio-transitional, virtio-non-transitional; backends random, egd, builtin
Feb 1 04:23:58 localhost nova_compute[225585]:   filesystem driver types: path, handle, virtiofs
Feb 1 04:23:58 localhost nova_compute[225585]:   tpm: models tpm-tis, tpm-crb; backends emulator, external; backend version 2.0
Feb 1 04:23:58 localhost nova_compute[225585]:   redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin
Feb 1 04:23:58 localhost nova_compute[225585]:   interface backends: default, passt; panic models: isa, hyperv
Feb 1 04:23:58 localhost nova_compute[225585]:   character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Feb 1 04:23:58 localhost nova_compute[225585]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Feb 1 04:23:58 localhost nova_compute[225585]:   additional values: 4095, on, off, off, Linux KVM Hv
Feb 1 04:23:58 localhost nova_compute[225585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
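Editor's note: the stripped XML above (and the i686/pc dump that follows) is the document returned by libvirt's getDomainCapabilities API, which nova's Host._get_domain_capabilities() fetches and caches per emulator/arch/machine type. A minimal standalone sketch of the same query is shown below; it assumes the libvirt-python bindings are installed and a local qemu:///system connection is reachable — the emulator path, arch and machine arguments are taken from the dump header that follows, everything else is illustrative.

    # Sketch only (assumptions: libvirt-python available, qemu:///system reachable).
    # Fetches the same domain-capabilities XML that nova logs here, then lists the
    # CPU models usable with cpu mode='custom', as enumerated in this log.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",   # emulator binary (from the dump below)
        "i686",                    # guest architecture
        "pc",                      # machine type alias for pc-i440fx-*
        "kvm",                     # virtualization type
        0,
    )
    root = ET.fromstring(caps_xml)
    custom = root.find("./cpu/mode[@name='custom']")
    models = [m.text for m in custom.findall("model")] if custom is not None else []
    print("%d custom CPU models, e.g. %s" % (len(models), models[:5]))
    conn.close()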
Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.802 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 1 04:23:58 localhost nova_compute[225585]: [libvirt domain-capabilities XML dump for i686/pc; element tags were stripped in log capture — recoverable values grouped below]
Feb 1 04:23:58 localhost nova_compute[225585]:   emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Feb 1 04:23:58 localhost nova_compute[225585]:   firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
Feb 1 04:23:58 localhost nova_compute[225585]:   on/off option pairs (CPU mode flags); host CPU model: EPYC-Rome (vendor AMD)
Feb 1 04:23:58 localhost nova_compute[225585]:   custom CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4,
Feb 1 04:23:58 localhost nova_compute[225585]:     Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5,
Feb 1 04:23:58 localhost nova_compute[225585]:     ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3,
Feb 1 04:23:58 localhost nova_compute[225585]:     Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3,
Feb 1 04:23:58 localhost nova_compute[225585]:     EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5,
Feb 1 04:23:58 localhost nova_compute[225585]:     GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4,
Feb 1 04:23:58 localhost nova_compute[225585]:     Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7,
Feb 1 04:23:58 localhost nova_compute[225585]:     IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2,
Feb 1 04:23:58 localhost nova_compute[225585]:     Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1,
Feb 1 04:23:58 localhost nova_compute[225585]:     Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, … (dump continues)
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v3 Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: 
Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Client-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Skylake-Server-v5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Snowridge-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Westmere Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Westmere-v2 Feb 1 04:23:58 localhost nova_compute[225585]: athlon Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: athlon-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: core2duo Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: core2duo-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: coreduo Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: coreduo-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: kvm32 Feb 1 04:23:58 localhost nova_compute[225585]: kvm32-v1 Feb 1 04:23:58 localhost nova_compute[225585]: kvm64 Feb 1 04:23:58 localhost nova_compute[225585]: kvm64-v1 Feb 1 04:23:58 localhost nova_compute[225585]: n270 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: n270-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: pentium Feb 1 04:23:58 localhost nova_compute[225585]: pentium-v1 Feb 1 04:23:58 localhost nova_compute[225585]: pentium2 Feb 1 04:23:58 localhost nova_compute[225585]: pentium2-v1 Feb 1 04:23:58 localhost nova_compute[225585]: pentium3 Feb 1 04:23:58 localhost nova_compute[225585]: pentium3-v1 Feb 1 04:23:58 localhost nova_compute[225585]: phenom Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: phenom-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: qemu32 Feb 1 04:23:58 localhost nova_compute[225585]: qemu32-v1 Feb 1 04:23:58 localhost nova_compute[225585]: qemu64 Feb 1 04:23:58 localhost nova_compute[225585]: qemu64-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: file Feb 1 04:23:58 localhost nova_compute[225585]: anonymous Feb 1 04:23:58 localhost nova_compute[225585]: memfd Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: disk Feb 1 04:23:58 localhost nova_compute[225585]: cdrom Feb 1 04:23:58 localhost nova_compute[225585]: floppy Feb 1 04:23:58 localhost nova_compute[225585]: lun Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: ide Feb 1 04:23:58 localhost nova_compute[225585]: fdc Feb 1 04:23:58 localhost nova_compute[225585]: scsi Feb 1 04:23:58 localhost 
nova_compute[225585]: virtio Feb 1 04:23:58 localhost nova_compute[225585]: usb Feb 1 04:23:58 localhost nova_compute[225585]: sata Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: virtio Feb 1 04:23:58 localhost nova_compute[225585]: virtio-transitional Feb 1 04:23:58 localhost nova_compute[225585]: virtio-non-transitional Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: vnc Feb 1 04:23:58 localhost nova_compute[225585]: egl-headless Feb 1 04:23:58 localhost nova_compute[225585]: dbus Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: subsystem Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: default Feb 1 04:23:58 localhost nova_compute[225585]: mandatory Feb 1 04:23:58 localhost nova_compute[225585]: requisite Feb 1 04:23:58 localhost nova_compute[225585]: optional Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: usb Feb 1 04:23:58 localhost nova_compute[225585]: pci Feb 1 04:23:58 localhost nova_compute[225585]: scsi Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: virtio Feb 1 04:23:58 localhost nova_compute[225585]: virtio-transitional Feb 1 04:23:58 localhost nova_compute[225585]: virtio-non-transitional Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: random Feb 1 04:23:58 localhost nova_compute[225585]: egd Feb 1 04:23:58 localhost nova_compute[225585]: builtin Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: path Feb 1 04:23:58 localhost nova_compute[225585]: handle Feb 1 04:23:58 localhost nova_compute[225585]: virtiofs Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: tpm-tis Feb 1 04:23:58 localhost nova_compute[225585]: tpm-crb Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: emulator Feb 1 04:23:58 localhost nova_compute[225585]: external Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: 2.0 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: usb Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: pty Feb 1 04:23:58 localhost nova_compute[225585]: unix Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: qemu Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: builtin Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: default Feb 1 04:23:58 localhost nova_compute[225585]: passt Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: isa Feb 1 04:23:58 localhost nova_compute[225585]: hyperv Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: null Feb 1 04:23:58 localhost nova_compute[225585]: vc Feb 1 04:23:58 localhost nova_compute[225585]: pty Feb 1 04:23:58 localhost nova_compute[225585]: dev Feb 1 04:23:58 localhost nova_compute[225585]: file Feb 1 04:23:58 localhost nova_compute[225585]: pipe Feb 1 04:23:58 localhost nova_compute[225585]: stdio Feb 1 04:23:58 localhost nova_compute[225585]: udp Feb 1 04:23:58 localhost nova_compute[225585]: tcp Feb 1 04:23:58 localhost nova_compute[225585]: unix Feb 1 04:23:58 localhost nova_compute[225585]: qemu-vdagent Feb 1 04:23:58 localhost nova_compute[225585]: dbus Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: relaxed Feb 1 04:23:58 localhost nova_compute[225585]: vapic Feb 1 04:23:58 localhost nova_compute[225585]: spinlocks Feb 1 04:23:58 localhost nova_compute[225585]: vpindex Feb 1 04:23:58 localhost nova_compute[225585]: runtime Feb 1 04:23:58 localhost nova_compute[225585]: synic Feb 1 04:23:58 localhost nova_compute[225585]: stimer Feb 1 04:23:58 localhost nova_compute[225585]: reset Feb 1 04:23:58 localhost nova_compute[225585]: vendor_id Feb 1 04:23:58 localhost nova_compute[225585]: frequencies Feb 1 04:23:58 localhost 
nova_compute[225585]: reenlightenment Feb 1 04:23:58 localhost nova_compute[225585]: tlbflush Feb 1 04:23:58 localhost nova_compute[225585]: ipi Feb 1 04:23:58 localhost nova_compute[225585]: avic Feb 1 04:23:58 localhost nova_compute[225585]: emsr_bitmap Feb 1 04:23:58 localhost nova_compute[225585]: xmm_input Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: 4095 Feb 1 04:23:58 localhost nova_compute[225585]: on Feb 1 04:23:58 localhost nova_compute[225585]: off Feb 1 04:23:58 localhost nova_compute[225585]: off Feb 1 04:23:58 localhost nova_compute[225585]: Linux KVM Hv Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.873 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 1 04:23:58 localhost nova_compute[225585]: 2026-02-01 09:23:58.879 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: /usr/libexec/qemu-kvm Feb 1 04:23:58 localhost nova_compute[225585]: kvm Feb 1 04:23:58 localhost nova_compute[225585]: pc-q35-rhel9.8.0 Feb 1 04:23:58 localhost nova_compute[225585]: x86_64 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: efi Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Feb 1 04:23:58 localhost nova_compute[225585]: /usr/share/edk2/ovmf/OVMF_CODE.fd Feb 1 04:23:58 localhost nova_compute[225585]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Feb 1 04:23:58 localhost nova_compute[225585]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: rom Feb 1 04:23:58 localhost nova_compute[225585]: pflash Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: yes Feb 1 04:23:58 localhost nova_compute[225585]: no Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: yes Feb 1 04:23:58 localhost nova_compute[225585]: no Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: on Feb 1 04:23:58 localhost nova_compute[225585]: off Feb 1 04:23:58 localhost 
Feb 1 04:23:58 localhost nova_compute[225585]: cpu migratable: on, off; host-model: EPYC-Rome, vendor AMD
Feb 1 04:23:58 localhost nova_compute[225585]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, ...
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-IBPB Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Milan Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Milan-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Milan-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Milan-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Rome Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Rome-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: EPYC-Rome-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Rome-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Rome-v4 Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Rome-v5 Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Turin Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-Turin-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-v1 Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-v2 Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: EPYC-v5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: GraniteRapids Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: GraniteRapids-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: GraniteRapids-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: GraniteRapids-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-noTSX Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-noTSX-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Haswell-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-noTSX Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost 
nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: 
Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v6 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Icelake-Server-v7 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: 
Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: IvyBridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: KnightsMill Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: KnightsMill-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Nehalem-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G1-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G2 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G2-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G3 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G3-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G4-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G5 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Opteron_G5-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Penryn Feb 1 04:23:58 localhost nova_compute[225585]: Penryn-v1 Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-IBRS Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-v1 Feb 1 04:23:58 localhost nova_compute[225585]: SandyBridge-v2 Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 
localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v3 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 
04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SapphireRapids-v4 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 
1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v1 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: SierraForest-v2 Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:58 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: 
Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: SierraForest-v3 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Skylake-Client Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Skylake-Client-IBRS Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Skylake-Client-noTSX-IBRS Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Skylake-Client-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost 
Feb 1 04:23:59 localhost nova_compute[225585]: custom CPU models (continued): Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 1 04:23:59 localhost nova_compute[225585]: memory backing source types: file, anonymous, memfd
Feb 1 04:23:59 localhost nova_compute[225585]: disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Feb 1 04:23:59 localhost nova_compute[225585]: graphics types: vnc, egl-headless, dbus
Feb 1 04:23:59 localhost nova_compute[225585]: hostdev mode: subsystem; startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Feb 1 04:23:59 localhost nova_compute[225585]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin
Feb 1 04:23:59 localhost nova_compute[225585]: filesystem drivers: path, handle, virtiofs; tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external (version 2.0)
Feb 1 04:23:59 localhost nova_compute[225585]: redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv
Feb 1 04:23:59 localhost nova_compute[225585]: character device backends: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Feb 1 04:23:59 localhost nova_compute[225585]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input (4095, on, off, off, Linux KVM Hv)
Feb 1 04:23:59 localhost nova_compute[225585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
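The DEBUG entries above and below are nova's dump of the libvirt domain-capabilities document for this host. As an illustrative sketch only (not nova's own code path), the same domainCapabilities XML can be fetched with the standard libvirt Python bindings; the connection URI below is an assumption, while the emulator, arch, machine and virttype values are taken from the entries in this log.

import libvirt                      # provided by the python3-libvirt package
import xml.etree.ElementTree as ET

# Illustrative parameters: the URI is an assumption; the emulator, arch,
# machine and virttype values come from the capability dumps in this log.
URI = "qemu:///system"
EMULATOR = "/usr/libexec/qemu-kvm"
ARCH, MACHINE, VIRTTYPE = "x86_64", "pc", "kvm"

conn = libvirt.open(URI)
try:
    # Fetch the <domainCapabilities> XML for this emulator/machine combination.
    caps_xml = conn.getDomainCapabilities(EMULATOR, ARCH, MACHINE, VIRTTYPE, 0)
finally:
    conn.close()

root = ET.fromstring(caps_xml)
# List the named CPU models and whether each is usable on this host.
for model in root.findall("./cpu/mode[@name='custom']/model"):
    print(model.text, model.get("usable"))

The same document is also available interactively from virsh domcapabilities when passed the same emulator, arch, machine and virttype values.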
Feb 1 04:23:59 localhost nova_compute[225585]: 2026-02-01 09:23:58.944 225589 DEBUG nova.virt.libvirt.host [None req-c8935b5f-8258-4f7b-b734-d073ac8ab747 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 1 04:23:59 localhost nova_compute[225585]: emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Feb 1 04:23:59 localhost nova_compute[225585]: loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
Feb 1 04:23:59 localhost nova_compute[225585]: CPU host-passthrough migratable: on, off; maximum migratable: on, off; host-model: EPYC-Rome (vendor AMD)
Feb 1 04:23:59 localhost nova_compute[225585]: custom CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4,
localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Icelake-Server-v5 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Icelake-Server-v6 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Icelake-Server-v7 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost 
nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: IvyBridge Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: IvyBridge-IBRS Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: IvyBridge-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: IvyBridge-v2 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: KnightsMill Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: KnightsMill-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Nehalem Feb 1 04:23:59 localhost nova_compute[225585]: Nehalem-IBRS Feb 1 04:23:59 localhost nova_compute[225585]: Nehalem-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Nehalem-v2 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G1 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G1-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G2 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G2-v1 Feb 1 04:23:59 localhost 
nova_compute[225585]: Opteron_G3 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G3-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G4 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G4-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G5 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Opteron_G5-v1 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Penryn Feb 1 04:23:59 localhost nova_compute[225585]: Penryn-v1 Feb 1 04:23:59 localhost nova_compute[225585]: SandyBridge Feb 1 04:23:59 localhost nova_compute[225585]: SandyBridge-IBRS Feb 1 04:23:59 localhost nova_compute[225585]: SandyBridge-v1 Feb 1 04:23:59 localhost nova_compute[225585]: SandyBridge-v2 Feb 1 04:23:59 localhost nova_compute[225585]: SapphireRapids Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: SapphireRapids-v1 Feb 1 
04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: SapphireRapids-v2 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 
localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: SapphireRapids-v3 Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:23:59 localhost nova_compute[225585]: Feb 1 04:26:53 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:26:53 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:26:53 localhost rsyslogd[760]: imjournal: 2179 messages lost due to rate-limiting (20000 allowed within 600 seconds) Feb 1 04:26:53 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:26:53 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:26:53 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:26:53 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. Feb 1 04:26:54 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:26:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:26:54 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
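The 04:23:59 nova_compute dump above appears to be the libvirt domain-capabilities XML that the libvirt driver logs at DEBUG; only the CPU model names survive. A minimal sketch, assuming the libvirt-python binding and a local qemu:///system socket (the arch and virttype arguments are assumptions for this KVM host), of reproducing the same model list outside nova:

import xml.etree.ElementTree as ET

import libvirt  # provided by the libvirt-python package


def list_cpu_models(uri="qemu:///system"):
    """Return the named CPU models from libvirt's domain-capabilities XML."""
    conn = libvirt.open(uri)
    try:
        caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    finally:
        conn.close()
    root = ET.fromstring(caps_xml)
    # Named models are listed under <cpu><mode name="custom">.
    return [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]


if __name__ == "__main__":
    for model in list_cpu_models():
        print(model)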
Feb 1 04:26:54 localhost podman[239932]: 2026-02-01 09:26:54.766897152 +0000 UTC m=+0.086776647 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:26:54 localhost podman[239932]: 2026-02-01 09:26:54.771628508 +0000 UTC m=+0.091508053 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Feb 1 04:26:54 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:26:55 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:26:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39524 DF PROTO=TCP SPT=33450 DPT=9101 SEQ=438668524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660190D0000000001030307) Feb 1 04:26:55 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:26:55 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:26:56 localhost podman[239949]: 2026-02-01 09:26:56.620239406 +0000 UTC m=+0.081493533 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:26:56 localhost podman[239949]: 2026-02-01 09:26:56.652184997 +0000 UTC m=+0.113439114 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', 
'--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:26:57 localhost nova_compute[225585]: 2026-02-01 09:26:57.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:26:58 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully. Feb 1 04:26:58 localhost systemd[1]: var-lib-containers-storage-overlay-1d315715373fb2ed69473b661022a322c730f5613516f294042e6eac2843e9be-merged.mount: Deactivated successfully. Feb 1 04:26:58 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:26:58 localhost nova_compute[225585]: 2026-02-01 09:26:58.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:26:58 localhost nova_compute[225585]: 2026-02-01 09:26:58.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:26:59 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:26:59 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. Feb 1 04:26:59 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. 
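The "/usr/bin/podman healthcheck run <id>" units and the health_status=healthy events above are podman's systemd-driven container health checks. A small sketch, assuming the podman CLI and one of the container names shown above (ovn_metadata_agent), of running the same check by hand and reading back the recorded status:

import json
import subprocess


def health_status(container="ovn_metadata_agent"):
    # Trigger the container's configured healthcheck once; exit code 0 means healthy.
    run = subprocess.run(["podman", "healthcheck", "run", container])
    # Read the recorded state back out of `podman inspect`.
    out = subprocess.run(["podman", "inspect", container],
                         capture_output=True, text=True, check=True)
    state = json.loads(out.stdout)[0]["State"]
    # Recent podman exposes the docker-compatible "Health" key; older builds used "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    return run.returncode, health.get("Status")


if __name__ == "__main__":
    print(health_status())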
Feb 1 04:26:59 localhost nova_compute[225585]: 2026-02-01 09:26:59.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27947 DF PROTO=TCP SPT=54428 DPT=9882 SEQ=2863979396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6602A0D0000000001030307) Feb 1 04:27:00 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:00 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:00 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:00 localhost nova_compute[225585]: 2026-02-01 09:27:00.990 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:00 localhost nova_compute[225585]: 2026-02-01 09:27:00.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:00 localhost nova_compute[225585]: 2026-02-01 09:27:00.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:27:00 localhost nova_compute[225585]: 2026-02-01 09:27:00.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.019 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.019 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.020 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.020 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.020 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27948 DF PROTO=TCP SPT=54428 DPT=9882 SEQ=2863979396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6602E0D0000000001030307) Feb 1 04:27:01 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:01 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.465 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:27:01 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
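The update_available_resource cycle above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed disk. A standalone sketch of the same probe; the JSON field names under "stats" are an assumption based on recent Ceph releases and may differ elsewhere:

import json
import subprocess


def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        capture_output=True, text=True, check=True,
    )
    # Cluster-wide totals live under "stats" (assumed field names).
    stats = json.loads(out.stdout)["stats"]
    gib = 1024 ** 3
    return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib


if __name__ == "__main__":
    total, avail = ceph_capacity_gib()
    print(f"total={total:.1f} GiB avail={avail:.1f} GiB")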
Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.636 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.637 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13257MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.637 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.637 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.735 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 
8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.735 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:27:01 localhost nova_compute[225585]: 2026-02-01 09:27:01.766 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:27:02 localhost podman[240011]: 2026-02-01 09:27:02.116442005 +0000 UTC m=+0.085683361 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, release=1769056855, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:27:02 localhost podman[240011]: 2026-02-01 09:27:02.130579961 +0000 UTC m=+0.099821307 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, io.openshift.tags=minimal rhel9) Feb 1 04:27:02 localhost nova_compute[225585]: 2026-02-01 09:27:02.259 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:27:02 localhost nova_compute[225585]: 2026-02-01 09:27:02.265 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:27:02 localhost nova_compute[225585]: 2026-02-01 09:27:02.287 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:27:02 localhost nova_compute[225585]: 2026-02-01 09:27:02.290 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:27:02 localhost nova_compute[225585]: 2026-02-01 09:27:02.291 225589 DEBUG 
oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:27:03 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. Feb 1 04:27:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27949 DF PROTO=TCP SPT=54428 DPT=9882 SEQ=2863979396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660360D0000000001030307) Feb 1 04:27:03 localhost systemd[1]: var-lib-containers-storage-overlay-311e95bb3bdbe1bb40730cbc80ffa3861fcafc1265b18b49a2e8169fc5d3cbf1-merged.mount: Deactivated successfully. Feb 1 04:27:03 localhost systemd[1]: var-lib-containers-storage-overlay-311e95bb3bdbe1bb40730cbc80ffa3861fcafc1265b18b49a2e8169fc5d3cbf1-merged.mount: Deactivated successfully. Feb 1 04:27:03 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:27:03 localhost nova_compute[225585]: 2026-02-01 09:27:03.291 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:03 localhost nova_compute[225585]: 2026-02-01 09:27:03.292 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:27:03 localhost nova_compute[225585]: 2026-02-01 09:27:03.292 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:27:03 localhost nova_compute[225585]: 2026-02-01 09:27:03.311 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:27:03 localhost nova_compute[225585]: 2026-02-01 09:27:03.312 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.398 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:03 localhost ceilometer_agent_compute[232200]: 
2026-02-01 09:27:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:27:04 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:04 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. Feb 1 04:27:04 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. Feb 1 04:27:05 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:05 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:05 localhost sshd[240033]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:27:05 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:06 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32382 DF PROTO=TCP SPT=50226 DPT=9102 SEQ=2308247506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660438D0000000001030307) Feb 1 04:27:06 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:06 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:07 localhost systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully. Feb 1 04:27:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:27:07 localhost systemd[1]: var-lib-containers-storage-overlay-179e7ed4ab403439e752a2c426c6db4ca9807018662c061e320fe01562a6e116-merged.mount: Deactivated successfully. 
Feb 1 04:27:07 localhost podman[240035]: 2026-02-01 09:27:07.866273452 +0000 UTC m=+0.101817992 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:27:07 localhost podman[240035]: 2026-02-01 09:27:07.895830936 +0000 UTC m=+0.131375426 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 04:27:07 localhost podman[240035]: unhealthy Feb 1 04:27:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62840 DF PROTO=TCP SPT=32930 DPT=9100 SEQ=1332322599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660500D0000000001030307) Feb 1 04:27:10 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:10 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:10 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:10 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:27:10 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Failed with result 'exit-code'. Feb 1 04:27:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39525 DF PROTO=TCP SPT=33450 DPT=9101 SEQ=438668524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660590E0000000001030307) Feb 1 04:27:12 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:12 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:12 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:13 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:13 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:13 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:14 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:14 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 1 04:27:14 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27951 DF PROTO=TCP SPT=54428 DPT=9882 SEQ=2863979396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660670D0000000001030307) Feb 1 04:27:16 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:17 localhost systemd[1]: var-lib-containers-storage-overlay-901f926467172f87fed8e093a0c623b4edfdf674c0cbe61bc939afde2d57f8c6-merged.mount: Deactivated successfully. Feb 1 04:27:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32384 DF PROTO=TCP SPT=50226 DPT=9102 SEQ=2308247506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660730E0000000001030307) Feb 1 04:27:19 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:19 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:19 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:21 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:21 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:21 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62842 DF PROTO=TCP SPT=32930 DPT=9100 SEQ=1332322599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660810E0000000001030307) Feb 1 04:27:22 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:22 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:22 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:23 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:23 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 1 04:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:27:23 localhost podman[240053]: 2026-02-01 09:27:23.835122487 +0000 UTC m=+0.070626629 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:27:23 localhost podman[240053]: 2026-02-01 09:27:23.847591974 +0000 UTC m=+0.083096156 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:27:23 localhost podman[240053]: unhealthy Feb 1 04:27:23 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:27:23 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. 
Feb 1 04:27:23 localhost podman[240054]: 2026-02-01 09:27:23.848606705 +0000 UTC m=+0.078039039 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller) Feb 1 04:27:23 localhost podman[240054]: 2026-02-01 09:27:23.928184923 +0000 UTC m=+0.157617267 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:27:24 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 1 04:27:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19941 DF PROTO=TCP SPT=44490 DPT=9101 SEQ=2930761123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6608D0D0000000001030307) Feb 1 04:27:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:27:26 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:26 localhost systemd[1]: var-lib-containers-storage-overlay-671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce-merged.mount: Deactivated successfully. Feb 1 04:27:26 localhost systemd[1]: var-lib-containers-storage-overlay-671a12c1b149c45f560a497746a5c06b1baf4bea205bfa54dc10c3d286f5bbce-merged.mount: Deactivated successfully. Feb 1 04:27:26 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:27:26 localhost podman[240101]: 2026-02-01 09:27:26.364841983 +0000 UTC m=+0.577753867 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:27:26 localhost podman[240101]: 2026-02-01 09:27:26.398851188 +0000 UTC m=+0.611763122 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent) Feb 1 04:27:28 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:27:28 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:28 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:28 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:27:28 localhost podman[240119]: 2026-02-01 09:27:28.926987005 +0000 UTC m=+0.297778630 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:27:28 localhost podman[240119]: 2026-02-01 09:27:28.937743938 +0000 UTC m=+0.308535573 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:27:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5831 DF PROTO=TCP SPT=49530 DPT=9882 SEQ=2860037822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6609F3C0000000001030307) Feb 1 04:27:30 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated 
successfully. Feb 1 04:27:30 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:30 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:30 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5832 DF PROTO=TCP SPT=49530 DPT=9882 SEQ=2860037822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660A34E0000000001030307) Feb 1 04:27:31 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:31 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:32 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:32 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:32 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5833 DF PROTO=TCP SPT=49530 DPT=9882 SEQ=2860037822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660AB4D0000000001030307) Feb 1 04:27:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:27:33 localhost podman[240144]: 2026-02-01 09:27:33.381320792 +0000 UTC m=+0.076514462 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1769056855, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, vendor=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:27:33 localhost podman[240144]: 2026-02-01 09:27:33.391284361 +0000 UTC m=+0.086478031 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, managed_by=edpm_ansible) Feb 1 04:27:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:35 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:35 localhost systemd[1]: var-lib-containers-storage-overlay-50254bf8e87a075d183197f5531e6c0f97888346b53b5d118b5ece2506404cbc-merged.mount: Deactivated successfully. Feb 1 04:27:35 localhost systemd[1]: var-lib-containers-storage-overlay-50254bf8e87a075d183197f5531e6c0f97888346b53b5d118b5ece2506404cbc-merged.mount: Deactivated successfully. Feb 1 04:27:35 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:27:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44022 DF PROTO=TCP SPT=39798 DPT=9102 SEQ=2139760279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660B8CD0000000001030307) Feb 1 04:27:36 localhost systemd[1]: var-lib-containers-storage-overlay-4e140ff8bd9a23afd7615f9a56d0521b539df31404142c6e9fb61e24e5e6cdd4-merged.mount: Deactivated successfully. Feb 1 04:27:36 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. Feb 1 04:27:37 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. 
Feb 1 04:27:38 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. Feb 1 04:27:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10994 DF PROTO=TCP SPT=35626 DPT=9100 SEQ=1453490941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660C54D0000000001030307) Feb 1 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:27:40 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:40 localhost podman[240164]: 2026-02-01 09:27:40.876383564 +0000 UTC m=+0.087596666 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:27:40 localhost podman[240164]: 2026-02-01 09:27:40.90982413 +0000 UTC m=+0.121037212 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:27:40 localhost podman[240164]: unhealthy Feb 1 04:27:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:41 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:27:41 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Failed with result 'exit-code'. Feb 1 04:27:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:27:41.745 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:27:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:27:41.746 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:27:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:27:41.746 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:27:41 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:42 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:42 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. 
Feb 1 04:27:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5892 DF PROTO=TCP SPT=41892 DPT=9101 SEQ=840572674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660D28D0000000001030307) Feb 1 04:27:43 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:27:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:44 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:44 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:27:45 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:27:45 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:45 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:27:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5835 DF PROTO=TCP SPT=49530 DPT=9882 SEQ=2860037822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660DB0E0000000001030307) Feb 1 04:27:48 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:27:48 localhost systemd[1]: var-lib-containers-storage-overlay-42956910233e56c0615893b331e8357f0bd5264eb11a7b97d46d18517d01f2f9-merged.mount: Deactivated successfully. Feb 1 04:27:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44024 DF PROTO=TCP SPT=39798 DPT=9102 SEQ=2139760279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660E90D0000000001030307) Feb 1 04:27:49 localhost systemd[1]: var-lib-containers-storage-overlay-4e140ff8bd9a23afd7615f9a56d0521b539df31404142c6e9fb61e24e5e6cdd4-merged.mount: Deactivated successfully. Feb 1 04:27:49 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. Feb 1 04:27:49 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. Feb 1 04:27:49 localhost systemd[1]: var-lib-containers-storage-overlay-4e140ff8bd9a23afd7615f9a56d0521b539df31404142c6e9fb61e24e5e6cdd4-merged.mount: Deactivated successfully. Feb 1 04:27:50 localhost systemd[1]: var-lib-containers-storage-overlay-b15384c0932804d65f1aed603e616839f92c3a386a4220ff9b424e6f3ffa126e-merged.mount: Deactivated successfully. 
Feb 1 04:27:51 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. Feb 1 04:27:51 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 1 04:27:51 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 1 04:27:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10996 DF PROTO=TCP SPT=35626 DPT=9100 SEQ=1453490941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA660F50D0000000001030307) Feb 1 04:27:53 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:27:53 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. Feb 1 04:27:53 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. Feb 1 04:27:53 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 1 04:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:27:53 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. 
Feb 1 04:27:54 localhost podman[240182]: 2026-02-01 09:27:54.064507406 +0000 UTC m=+0.082429495 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:27:54 localhost podman[240182]: 2026-02-01 09:27:54.076631302 +0000 UTC m=+0.094553411 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:27:54 localhost podman[240182]: unhealthy Feb 1 04:27:54 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:27:54 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. Feb 1 04:27:54 localhost systemd[1]: session-55.scope: Deactivated successfully. Feb 1 04:27:54 localhost systemd[1]: session-55.scope: Consumed 1min 9.186s CPU time. Feb 1 04:27:54 localhost systemd-logind[761]: Session 55 logged out. Waiting for processes to exit. Feb 1 04:27:54 localhost systemd-logind[761]: Removed session 55. Feb 1 04:27:54 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:27:54 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 1 04:27:55 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. 
Feb 1 04:27:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5894 DF PROTO=TCP SPT=41892 DPT=9101 SEQ=840572674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661030E0000000001030307) Feb 1 04:27:56 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 1 04:27:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:27:57 localhost nova_compute[225585]: 2026-02-01 09:27:57.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:58 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 1 04:27:58 localhost systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully. Feb 1 04:27:58 localhost podman[240206]: 2026-02-01 09:27:58.388869456 +0000 UTC m=+1.970463155 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:27:58 localhost podman[240206]: 2026-02-01 09:27:58.492448876 +0000 UTC m=+2.074042635 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:27:58 localhost nova_compute[225585]: 2026-02-01 09:27:58.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:27:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:28:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55978 DF PROTO=TCP SPT=44164 DPT=9882 SEQ=888582071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661146D0000000001030307) Feb 1 04:28:00 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. Feb 1 04:28:00 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 1 04:28:00 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 1 04:28:00 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:28:00 localhost nova_compute[225585]: 2026-02-01 09:28:00.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:00 localhost nova_compute[225585]: 2026-02-01 09:28:00.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:00 localhost nova_compute[225585]: 2026-02-01 09:28:00.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:00 localhost nova_compute[225585]: 2026-02-01 09:28:00.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:28:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55979 DF PROTO=TCP SPT=44164 DPT=9882 SEQ=888582071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661188D0000000001030307) Feb 1 04:28:01 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:28:01 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. Feb 1 04:28:02 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. 
Feb 1 04:28:02 localhost podman[240258]: 2026-02-01 09:28:02.311894445 +0000 UTC m=+1.273740198 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:28:02 localhost podman[240258]: 2026-02-01 09:28:02.347350824 +0000 UTC m=+1.309196557 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:28:02 localhost nova_compute[225585]: 2026-02-01 09:28:02.990 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:02 localhost nova_compute[225585]: 2026-02-01 09:28:02.991 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running 
periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.019 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.019 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.020 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.039 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.039 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.039 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.060 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.060 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.061 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.061 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.061 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:28:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55980 DF PROTO=TCP SPT=44164 DPT=9882 SEQ=888582071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661208D0000000001030307) Feb 1 04:28:03 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:28:03 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:28:03 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.552 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:28:03 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 1 04:28:03 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. 
Feb 1 04:28:03 localhost podman[240319]: 2026-02-01 09:28:03.713702233 +0000 UTC m=+0.093370915 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, release=1764794109, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container) Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.744 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.745 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13148MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.745 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.745 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.816 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.817 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:28:03 localhost podman[240319]: 2026-02-01 09:28:03.822830726 +0000 UTC m=+0.202499478 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, name=rhceph, build-date=2025-12-08T17:28:53Z, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Feb 1 04:28:03 localhost nova_compute[225585]: 2026-02-01 09:28:03.848 225589 DEBUG 
oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:28:04 localhost nova_compute[225585]: 2026-02-01 09:28:04.301 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:28:04 localhost nova_compute[225585]: 2026-02-01 09:28:04.307 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:28:04 localhost nova_compute[225585]: 2026-02-01 09:28:04.323 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:28:04 localhost nova_compute[225585]: 2026-02-01 09:28:04.325 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:28:04 localhost nova_compute[225585]: 2026-02-01 09:28:04.326 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:28:05 localhost systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully. Feb 1 04:28:05 localhost systemd[1]: tmp-crun.KZtXMK.mount: Deactivated successfully. 
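The resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the Ceph-backed disk capacity. A minimal sketch of making the same call directly, assuming the ceph CLI, the openstack keyring and /etc/ceph/ceph.conf are present on the host; no JSON fields beyond the top level are assumed:

    import json
    import subprocess

    # Same command the nova-compute periodic task runs (see the entries above).
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]

    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(out.stdout)

    # Print only the top-level structure rather than assuming specific fields.
    print("ceph df keys:", sorted(report))
    for pool in report.get("pools", []):
        print("pool:", pool.get("name"))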
Feb 1 04:28:05 localhost podman[240246]: 2026-02-01 09:28:05.666793687 +0000 UTC m=+6.635262574 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:28:05 localhost podman[240246]: 2026-02-01 09:28:05.702658339 +0000 UTC m=+6.671127236 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:28:06 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:28:06 localhost podman[240392]: 2026-02-01 09:28:06.539769514 +0000 UTC m=+0.129565507 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_id=openstack_network_exporter, release=1769056855, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:28:06 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 1 04:28:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47602 DF PROTO=TCP SPT=37270 DPT=9102 SEQ=3306673345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6612E0D0000000001030307) Feb 1 04:28:06 localhost podman[240392]: 2026-02-01 09:28:06.585533183 +0000 UTC m=+0.175329166 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, release=1769056855, version=9.7, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:28:07 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:07 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:07 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:07 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:28:07 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:08 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 1 04:28:08 localhost systemd[1]: var-lib-containers-storage-overlay-193d63b6dd9579507d9f1518ccbcb97a99c18e05e53fbccdc25e375b68ff02d6-merged.mount: Deactivated successfully. Feb 1 04:28:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58713 DF PROTO=TCP SPT=49182 DPT=9100 SEQ=2712581907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6613A8D0000000001030307) Feb 1 04:28:10 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:28:10 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:10 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:28:11 localhost podman[240498]: 2026-02-01 09:28:11.620693322 +0000 UTC m=+0.085507551 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 1 04:28:11 localhost podman[240498]: 2026-02-01 09:28:11.652465767 +0000 UTC m=+0.117280016 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 1 04:28:11 localhost podman[240498]: unhealthy Feb 1 04:28:12 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:12 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:28:12 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:28:12 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:28:12 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Failed with result 'exit-code'. Feb 1 04:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53483 DF PROTO=TCP SPT=55396 DPT=9101 SEQ=2203481120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661478E0000000001030307) Feb 1 04:28:13 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:13 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:13 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:14 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:14 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:15 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55982 DF PROTO=TCP SPT=44164 DPT=9882 SEQ=888582071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661510D0000000001030307) Feb 1 04:28:17 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:17 localhost systemd[1]: var-lib-containers-storage-overlay-bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c-merged.mount: Deactivated successfully. 
Feb 1 04:28:17 localhost systemd[1]: var-lib-containers-storage-overlay-bd91fbbf62f7f0af7c33a117d6552a6678d20821e3759b8b2c7a56c46d8f5a7c-merged.mount: Deactivated successfully. Feb 1 04:28:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47604 DF PROTO=TCP SPT=37270 DPT=9102 SEQ=3306673345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6615F0D0000000001030307) Feb 1 04:28:19 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:28:19 localhost systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully. Feb 1 04:28:19 localhost systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully. Feb 1 04:28:21 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:21 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:28:21 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:28:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58715 DF PROTO=TCP SPT=49182 DPT=9100 SEQ=2712581907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6616B0E0000000001030307) Feb 1 04:28:22 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:22 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:22 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:23 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:23 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:23 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:24 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:24 localhost sshd[240516]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:28:24 localhost podman[240518]: 2026-02-01 09:28:24.631901968 +0000 UTC m=+0.089245340 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:28:24 localhost podman[240518]: 2026-02-01 09:28:24.644605576 +0000 UTC m=+0.101948948 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:28:24 localhost podman[240518]: unhealthy Feb 1 04:28:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53485 DF PROTO=TCP SPT=55396 DPT=9101 SEQ=2203481120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661770D0000000001030307) Feb 1 04:28:26 localhost systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully. Feb 1 04:28:26 localhost systemd[1]: var-lib-containers-storage-overlay-ef459e28ad8635c7a92e994211ce7b874f14e5a38aca9f947ab317c65716a008-merged.mount: Deactivated successfully. Feb 1 04:28:26 localhost systemd[1]: var-lib-containers-storage-overlay-ef459e28ad8635c7a92e994211ce7b874f14e5a38aca9f947ab317c65716a008-merged.mount: Deactivated successfully. Feb 1 04:28:26 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:28:26 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. Feb 1 04:28:27 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
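The transient healthcheck unit above exits with status 1 because "podman healthcheck run" reports the podman_exporter container as unhealthy. A hedged sketch for re-running that check by hand and mapping the exit code, assuming the container name shown in the container_name= field above and that a non-zero exit code means the check failed, as it does in these entries:

    import subprocess

    # Re-run the same check the transient systemd unit in the log triggers;
    # 'podman_exporter' is the container_name= value from the entries above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"],
        capture_output=True, text=True,
    )

    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"podman_exporter: {status} (exit code {result.returncode})")
    if result.returncode != 0:
        # The check's own output ('unhealthy' in the log above) is captured here.
        print(result.stdout.strip() or result.stderr.strip())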
Feb 1 04:28:27 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully. Feb 1 04:28:27 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully. Feb 1 04:28:29 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:29 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:28:29 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:28:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29408 DF PROTO=TCP SPT=34674 DPT=9882 SEQ=141345313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661899D0000000001030307) Feb 1 04:28:30 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:28:30 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:30 localhost podman[240542]: 2026-02-01 09:28:30.653372797 +0000 UTC m=+0.086748094 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:28:30 localhost podman[240542]: 2026-02-01 09:28:30.713114825 +0000 UTC m=+0.146490122 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller) Feb 1 04:28:30 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:28:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29409 DF PROTO=TCP SPT=34674 DPT=9882 SEQ=141345313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6618D8D0000000001030307) Feb 1 04:28:31 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:31 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:31 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:32 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29410 DF PROTO=TCP SPT=34674 DPT=9882 SEQ=141345313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661958F0000000001030307) Feb 1 04:28:33 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully. Feb 1 04:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:28:33 localhost systemd[1]: var-lib-containers-storage-overlay-d8c1697d3f9451811eabeba845d2774ca9523a4c1f6255791f262d42dbea547b-merged.mount: Deactivated successfully. 
Feb 1 04:28:33 localhost podman[240567]: 2026-02-01 09:28:33.398837249 +0000 UTC m=+0.066569497 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:28:33 localhost podman[240567]: 2026-02-01 09:28:33.411750264 +0000 UTC m=+0.079482582 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:28:33 localhost systemd[1]: var-lib-containers-storage-overlay-d8c1697d3f9451811eabeba845d2774ca9523a4c1f6255791f262d42dbea547b-merged.mount: Deactivated successfully. Feb 1 04:28:33 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:28:35 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. 
Feb 1 04:28:35 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:35 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:28:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56847 DF PROTO=TCP SPT=58622 DPT=9102 SEQ=4015639268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661A34D0000000001030307) Feb 1 04:28:36 localhost podman[240590]: 2026-02-01 09:28:36.618280146 +0000 UTC m=+0.082902996 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:28:36 localhost podman[240590]: 2026-02-01 09:28:36.626660662 +0000 UTC m=+0.091283342 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:28:37 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:28:37 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:28:37 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:28:38 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:28:38 localhost podman[240607]: 2026-02-01 09:28:38.073241122 +0000 UTC m=+0.423084010 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:28:38 localhost podman[240607]: 2026-02-01 09:28:38.086025122 +0000 UTC m=+0.435868020 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.7, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, distribution-scope=public, build-date=2026-01-22T05:09:47Z, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter) Feb 1 04:28:38 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:38 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:39 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:28:39 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:28:39 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61270 DF PROTO=TCP SPT=55446 DPT=9100 SEQ=332900147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661AFCD0000000001030307) Feb 1 04:28:40 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 1 04:28:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:28:41.745 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:28:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:28:41.746 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:28:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:28:41.746 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:28:42 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:28:42 localhost systemd[1]: var-lib-containers-storage-overlay-457ad0364d778031a8ec3d2148346ff43fcb0296666a44f62af8c388a01d2e64-merged.mount: Deactivated successfully. Feb 1 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17949 DF PROTO=TCP SPT=34894 DPT=9101 SEQ=1299868485 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661BCCD0000000001030307) Feb 1 04:28:43 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 1 04:28:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:28:43 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. Feb 1 04:28:43 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. 
Feb 1 04:28:43 localhost podman[240628]: 2026-02-01 09:28:43.367160331 +0000 UTC m=+0.076303265 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:28:43 localhost podman[240628]: 2026-02-01 09:28:43.374755942 +0000 UTC m=+0.083898896 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:28:43 localhost podman[240628]: unhealthy Feb 1 04:28:43 localhost sshd[240646]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:28:43 localhost systemd-logind[761]: New session 56 of user zuul. Feb 1 04:28:43 localhost systemd[1]: Started Session 56 of User zuul. Feb 1 04:28:44 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:28:44 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Failed with result 'exit-code'. Feb 1 04:28:44 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 1 04:28:44 localhost python3.9[240742]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman Feb 1 04:28:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29412 DF PROTO=TCP SPT=34674 DPT=9882 SEQ=141345313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661C50D0000000001030307) Feb 1 04:28:45 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:46 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. Feb 1 04:28:46 localhost systemd[1]: var-lib-containers-storage-overlay-5f62902336a91aed0e6d89cda1611500b3d6fe7b4bddf84b8ce31199c37cfaf6-merged.mount: Deactivated successfully. Feb 1 04:28:47 localhost python3.9[240865]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:28:47 localhost systemd[1]: Started libpod-conmon-c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.scope. 
Feb 1 04:28:47 localhost podman[240866]: 2026-02-01 09:28:47.179169079 +0000 UTC m=+0.098183754 container exec c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:28:47 localhost podman[240866]: 2026-02-01 09:28:47.183624466 +0000 UTC m=+0.102639201 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller) Feb 1 04:28:48 localhost systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully. Feb 1 04:28:48 localhost systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully. 
Feb 1 04:28:48 localhost systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully. Feb 1 04:28:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56849 DF PROTO=TCP SPT=58622 DPT=9102 SEQ=4015639268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661D30D0000000001030307) Feb 1 04:28:49 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:28:49 localhost systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully. Feb 1 04:28:49 localhost python3.9[241003]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:28:50 localhost systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully. Feb 1 04:28:50 localhost systemd[1]: libpod-conmon-c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.scope: Deactivated successfully. Feb 1 04:28:50 localhost systemd[1]: Started libpod-conmon-c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.scope. Feb 1 04:28:50 localhost podman[241004]: 2026-02-01 09:28:50.218825858 +0000 UTC m=+0.289163654 container exec c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible) Feb 1 04:28:50 localhost podman[241004]: 2026-02-01 09:28:50.250676563 +0000 UTC m=+0.321014349 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller) Feb 1 04:28:50 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 1 04:28:50 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:28:51 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 1 04:28:51 localhost systemd[1]: libpod-conmon-c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.scope: Deactivated successfully. Feb 1 04:28:51 localhost python3.9[241144]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:28:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61272 DF PROTO=TCP SPT=55446 DPT=9100 SEQ=332900147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661DF0D0000000001030307) Feb 1 04:28:52 localhost python3.9[241254]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman Feb 1 04:28:53 localhost systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully. Feb 1 04:28:54 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:54 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 1 04:28:55 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 1 04:28:55 localhost python3.9[241377]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:28:55 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:55 localhost systemd[1]: Started libpod-conmon-412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.scope. Feb 1 04:28:55 localhost podman[241378]: 2026-02-01 09:28:55.233588259 +0000 UTC m=+0.092504980 container exec 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:28:55 localhost podman[241378]: 2026-02-01 09:28:55.263080472 +0000 UTC m=+0.121997183 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 1 04:28:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17951 DF PROTO=TCP SPT=34894 DPT=9101 SEQ=1299868485 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661ED0E0000000001030307) Feb 1 04:28:55 localhost systemd[1]: libpod-conmon-412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.scope: Deactivated successfully. Feb 1 04:28:56 localhost python3.9[241516]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:28:56 localhost systemd[1]: Started libpod-conmon-412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.scope. Feb 1 04:28:56 localhost podman[241517]: 2026-02-01 09:28:56.149367436 +0000 UTC m=+0.130211223 container exec 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:28:56 localhost podman[241517]: 2026-02-01 09:28:56.178302891 +0000 UTC m=+0.159146618 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:28:56 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 1 04:28:56 localhost systemd[1]: var-lib-containers-storage-overlay-1783ac4e59af83bfa6c705cb913a4e3f5e5d835b34fd8ada82ce7a661d9e5a58-merged.mount: Deactivated successfully. Feb 1 04:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: libpod-conmon-412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.scope: Deactivated successfully. 
Feb 1 04:28:57 localhost podman[241564]: 2026-02-01 09:28:57.136445523 +0000 UTC m=+0.603324752 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:28:57 localhost python3.9[241667]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:28:57 localhost podman[241564]: 2026-02-01 09:28:57.172395673 +0000 UTC m=+0.639274842 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:28:57 localhost podman[241564]: unhealthy Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:28:57 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:28:57 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. 
Feb 1 04:28:57 localhost python3.9[241787]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman Feb 1 04:28:57 localhost nova_compute[225585]: 2026-02-01 09:28:57.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:57 localhost nova_compute[225585]: 2026-02-01 09:28:57.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:28:58 localhost nova_compute[225585]: 2026-02-01 09:28:58.016 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:28:58 localhost nova_compute[225585]: 2026-02-01 09:28:58.017 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:58 localhost nova_compute[225585]: 2026-02-01 09:28:58.017 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:28:58 localhost nova_compute[225585]: 2026-02-01 09:28:58.045 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:58 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 1 04:28:58 localhost systemd[1]: var-lib-containers-storage-overlay-f747231ffc56e15c128dac75ec633f161eee676530b28d17cb7b8d0be7728054-merged.mount: Deactivated successfully. Feb 1 04:28:59 localhost nova_compute[225585]: 2026-02-01 09:28:59.060 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:28:59 localhost python3.9[241910]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:28:59 localhost systemd[1]: Started libpod-conmon-3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.scope. 
Feb 1 04:28:59 localhost podman[241911]: 2026-02-01 09:28:59.719637482 +0000 UTC m=+0.090611112 container exec 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:28:59 localhost podman[241911]: 2026-02-01 09:28:59.752786666 +0000 UTC m=+0.123760326 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_compute) Feb 1 04:28:59 localhost nova_compute[225585]: 2026-02-01 09:28:59.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61411 DF PROTO=TCP SPT=51370 DPT=9882 SEQ=1196771716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA661FECD0000000001030307) Feb 1 04:29:00 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:29:00 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:00 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:29:00 localhost nova_compute[225585]: 2026-02-01 09:29:00.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:00 localhost nova_compute[225585]: 2026-02-01 09:29:00.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61412 DF PROTO=TCP SPT=51370 DPT=9882 SEQ=1196771716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66202CD0000000001030307) Feb 1 04:29:01 localhost python3.9[242060]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:01 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:01 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 1 04:29:02 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:29:02 localhost systemd[1]: libpod-conmon-3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.scope: Deactivated successfully. Feb 1 04:29:02 localhost podman[241995]: 2026-02-01 09:29:02.13602011 +0000 UTC m=+1.341815947 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller) Feb 1 04:29:02 localhost podman[241995]: 2026-02-01 09:29:02.182642876 +0000 UTC m=+1.388438653 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller) Feb 1 04:29:02 localhost systemd[1]: Started 
libpod-conmon-3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.scope. Feb 1 04:29:02 localhost podman[242061]: 2026-02-01 09:29:02.22200156 +0000 UTC m=+0.947288461 container exec 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:29:02 localhost podman[242061]: 2026-02-01 09:29:02.255860915 +0000 UTC m=+0.981147846 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 04:29:02 localhost nova_compute[225585]: 2026-02-01 09:29:02.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:02 localhost nova_compute[225585]: 2026-02-01 09:29:02.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:29:02 localhost nova_compute[225585]: 2026-02-01 09:29:02.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.015 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.015 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.015 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.016 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.016 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.057 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 
2026-02-01 09:29:03.057 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.058 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.058 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.058 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:29:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61413 DF PROTO=TCP SPT=51370 DPT=9882 SEQ=1196771716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6620ACE0000000001030307) Feb 1 04:29:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:03 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:03 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:03 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.398 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.399 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.400 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.401 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:29:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.496 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.655 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.656 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13137MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.656 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.656 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:29:03 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.795 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.796 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.879 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:29:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.975 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.975 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:29:03 localhost nova_compute[225585]: 2026-02-01 09:29:03.990 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.011 225589 DEBUG nova.scheduler.client.report [None 
req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.042 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:29:04 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:04 localhost systemd[1]: libpod-conmon-3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.scope: Deactivated successfully. 
Feb 1 04:29:04 localhost podman[242212]: 2026-02-01 09:29:04.103038366 +0000 UTC m=+0.317179232 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:29:04 localhost podman[242212]: 2026-02-01 09:29:04.141653116 +0000 UTC m=+0.355793972 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:29:04 localhost python3.9[242244]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:04 localhost 
systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.501 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.505 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.528 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.530 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:29:04 localhost nova_compute[225585]: 2026-02-01 09:29:04.531 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:29:05 localhost python3.9[242388]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman Feb 1 04:29:05 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:06 localhost systemd[1]: var-lib-containers-storage-overlay-19867aa9ce07feb42ab4d071eed0ec581b8be5de4a737b08d8913c4970e7b3a5-merged.mount: Deactivated successfully. 
Feb 1 04:29:06 localhost nova_compute[225585]: 2026-02-01 09:29:06.526 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:29:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64187 DF PROTO=TCP SPT=38200 DPT=9102 SEQ=1392793442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662184D0000000001030307) Feb 1 04:29:08 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:29:08 localhost systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully. Feb 1 04:29:08 localhost podman[242401]: 2026-02-01 09:29:08.692897363 +0000 UTC m=+0.330400076 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:29:08 localhost podman[242401]: 2026-02-01 09:29:08.69835257 +0000 UTC m=+0.335855293 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:29:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:29:09 localhost python3.9[242528]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18567 DF PROTO=TCP SPT=35012 DPT=9100 SEQ=3267543071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66224CD0000000001030307) Feb 1 04:29:10 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:10 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:29:10 localhost systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully. Feb 1 04:29:10 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:29:10 localhost podman[242527]: 2026-02-01 09:29:10.82609069 +0000 UTC m=+1.538691057 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z) Feb 1 04:29:10 localhost podman[242527]: 2026-02-01 09:29:10.836087196 +0000 UTC m=+1.548687593 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, release=1769056855, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:29:10 localhost systemd[1]: Started libpod-conmon-c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.scope. 
Feb 1 04:29:10 localhost podman[242540]: 2026-02-01 09:29:10.924279392 +0000 UTC m=+1.390896847 container exec c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:29:10 localhost podman[242540]: 2026-02-01 09:29:10.957757936 +0000 UTC m=+1.424375431 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:29:11 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:11 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:12 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 1 04:29:12 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:29:12 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:12 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:12 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:12 localhost systemd[1]: libpod-conmon-c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.scope: Deactivated successfully. Feb 1 04:29:13 localhost python3.9[242687]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19714 DF PROTO=TCP SPT=49910 DPT=9101 SEQ=4115187361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662320D0000000001030307) Feb 1 04:29:13 localhost systemd[1]: Started libpod-conmon-c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.scope. Feb 1 04:29:13 localhost podman[242688]: 2026-02-01 09:29:13.191019895 +0000 UTC m=+0.124809509 container exec c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:29:13 localhost podman[242688]: 2026-02-01 09:29:13.221869268 +0000 UTC m=+0.155658842 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', 
'--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:29:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully. Feb 1 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-877c65e867b205f11a32fcdb99f229d7cc1aad0815e744014cf57490bce97673-merged.mount: Deactivated successfully. Feb 1 04:29:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61415 DF PROTO=TCP SPT=51370 DPT=9882 SEQ=1196771716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6623B0E0000000001030307) Feb 1 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-877c65e867b205f11a32fcdb99f229d7cc1aad0815e744014cf57490bce97673-merged.mount: Deactivated successfully. 
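The ansible-containers.podman.podman_container_exec entries record the deployment asking the container for a uid/gid (command=id -u / id -g). Reconstructed from the logged module arguments, a roughly equivalent ad-hoc call and the plain podman operation it wraps would look like this (the localhost target is an assumption):

    # Ad-hoc form of the logged module call (containers.podman collection required).
    ansible localhost -m containers.podman.podman_container_exec \
      -a "name=node_exporter command='id -g'"
    # The underlying operation, expressed directly with podman.
    podman exec node_exporter id -g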
Feb 1 04:29:15 localhost podman[242717]: 2026-02-01 09:29:15.571812843 +0000 UTC m=+0.782459000 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:29:15 localhost podman[242717]: 2026-02-01 09:29:15.610696663 +0000 UTC m=+0.821342770 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:29:17 localhost python3.9[242841]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:17 localhost python3.9[242951]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman Feb 1 04:29:17 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:17 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:18 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:18 localhost systemd[1]: libpod-conmon-c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.scope: Deactivated successfully. Feb 1 04:29:18 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64189 DF PROTO=TCP SPT=38200 DPT=9102 SEQ=1392793442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662490D0000000001030307) Feb 1 04:29:19 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:20 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:20 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:21 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:21 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:21 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
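The health_status=unhealthy result reported above for ceilometer_agent_compute comes from podman's periodic healthcheck, which runs the configured test ('/openstack/healthcheck compute') inside the container. To re-run the check by hand and look at the stored health state, something along these lines should work (the inspect field name varies slightly between podman releases):

    # Re-run the healthcheck once; the exit status is non-zero when unhealthy.
    podman healthcheck run ceilometer_agent_compute; echo "exit=$?"
    # Stored health state; on older podman the field may be .State.Healthcheck.
    podman inspect --format '{{json .State.Health}}' ceilometer_agent_compute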
Feb 1 04:29:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18569 DF PROTO=TCP SPT=35012 DPT=9100 SEQ=3267543071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662550D0000000001030307) Feb 1 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:22 localhost python3.9[243076]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:22 localhost systemd[1]: Started libpod-conmon-a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.scope. Feb 1 04:29:22 localhost podman[243077]: 2026-02-01 09:29:22.302877553 +0000 UTC m=+0.100838275 container exec a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:29:22 localhost podman[243077]: 2026-02-01 09:29:22.331575201 +0000 UTC m=+0.129535883 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:29:23 localhost python3.9[243217]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:24 localhost systemd[1]: 
var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:25 localhost systemd[1]: var-lib-containers-storage-overlay-7b9f50aed1094cdf3c8ae90862135d9821bbb7f673296f42b1c4d115dfdd346a-merged.mount: Deactivated successfully. Feb 1 04:29:25 localhost systemd[1]: libpod-conmon-a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.scope: Deactivated successfully. Feb 1 04:29:25 localhost systemd[1]: Started libpod-conmon-a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.scope. Feb 1 04:29:25 localhost podman[243218]: 2026-02-01 09:29:25.204526747 +0000 UTC m=+1.906614615 container exec a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:29:25 localhost podman[243218]: 2026-02-01 09:29:25.235595054 +0000 UTC m=+1.937682892 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:29:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19716 DF PROTO=TCP SPT=49910 DPT=9101 SEQ=4115187361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662630E0000000001030307) Feb 1 04:29:25 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 1 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. 
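podman_exporter is configured above with CONTAINER_HOST=unix:///run/podman/podman.sock and the host socket bind-mounted read-write, so it scrapes container metrics through the Podman REST API. A quick way to confirm that socket answers (run as root; the hostname "d" is a placeholder, since any name works over a unix socket):

    # Ping the Podman API socket used by podman_exporter; prints "OK" on success.
    curl --unix-socket /run/podman/podman.sock http://d/_ping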
Feb 1 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. Feb 1 04:29:26 localhost python3.9[243358]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 1 04:29:27 localhost systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully. Feb 1 04:29:27 localhost systemd[1]: libpod-conmon-a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.scope: Deactivated successfully. Feb 1 04:29:27 localhost python3.9[243468]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman Feb 1 04:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:29:27 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:28 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
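The ansible-ansible.builtin.file entry above tightens the healthcheck mount directory for podman_exporter to root-owned mode 0700, recursively. Reconstructed from the logged arguments as an ad-hoc call (the localhost target is an assumption):

    # Ad-hoc form of the logged ansible.builtin.file task.
    ansible localhost -m ansible.builtin.file \
      -a "path=/var/lib/openstack/healthchecks/podman_exporter state=directory owner=0 group=0 mode=0700 recurse=true"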
Feb 1 04:29:28 localhost podman[243482]: 2026-02-01 09:29:28.097405308 +0000 UTC m=+0.170484670 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:29:28 localhost podman[243482]: 2026-02-01 09:29:28.135677935 +0000 UTC m=+0.208757257 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:29:28 localhost podman[243482]: unhealthy Feb 1 04:29:28 localhost python3.9[243615]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:28 localhost systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully. Feb 1 04:29:28 localhost systemd[1]: var-lib-containers-storage-overlay-d96180a36bb10b52574296fc744e208425bb78036eb13d53db69ed84f3ab806e-merged.mount: Deactivated successfully. Feb 1 04:29:28 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:29:28 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. Feb 1 04:29:28 localhost systemd[1]: Started libpod-conmon-1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.scope. 
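Above, the transient healthcheck unit for podman_exporter exits with status=1/FAILURE because the check printed "unhealthy"; systemd marks the unit failed, while the container's own status is tracked separately by podman. Both views can be compared on the node (the unit is named after the full container ID shown in the log):

    # Container ID taken from the log entries above.
    CID=a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d
    systemctl status "${CID}.service" --no-pager                       # failed healthcheck run
    podman ps --filter "id=${CID}" --format '{{.Names}} {{.Status}}'   # container view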
Feb 1 04:29:28 localhost podman[243616]: 2026-02-01 09:29:28.962557522 +0000 UTC m=+0.184996774 container exec 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter) Feb 1 04:29:28 localhost podman[243616]: 2026-02-01 09:29:28.966590594 +0000 UTC m=+0.189029816 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7) Feb 1 04:29:29 localhost systemd[1]: libpod-conmon-1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.scope: Deactivated successfully. 
Feb 1 04:29:29 localhost python3.9[243758]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Feb 1 04:29:29 localhost systemd[1]: Started libpod-conmon-1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.scope. Feb 1 04:29:29 localhost podman[243759]: 2026-02-01 09:29:29.775092591 +0000 UTC m=+0.097632219 container exec 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1769056855, config_id=openstack_network_exporter) Feb 1 04:29:29 localhost podman[243759]: 2026-02-01 09:29:29.808927033 +0000 UTC m=+0.131466651 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, release=1769056855, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, version=9.7, distribution-scope=public, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 04:29:29 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Feb 1 04:29:29 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully. 
Feb 1 04:29:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19602 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1052902866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66273FD0000000001030307) Feb 1 04:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19603 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1052902866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66278100000000001030307) Feb 1 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:32 localhost python3.9[243897]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19604 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1052902866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662800D0000000001030307) Feb 1 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:33 localhost systemd[1]: libpod-conmon-1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.scope: Deactivated successfully. 
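The recurring kernel "DROPPING:" records in this span are netfilter log entries for TCP SYNs from 192.168.122.10 arriving on br-ex toward the exporter ports (9100, 9101, 9102, 9882), i.e. connection attempts hitting a log-and-drop rule. A rule of this general shape would produce such messages; the table and chain names and the port set are illustrative, not recovered from the actual ruleset, which the log does not show:

    # Illustrative log-then-drop rule with the observed "DROPPING: " prefix
    # (inet/filter/input are assumptions; quoting preserves nft's own quotes).
    nft add rule inet filter input iifname br-ex \
      tcp dport '{ 9100, 9101, 9102, 9882 }' \
      log prefix '"DROPPING: "' drop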
Feb 1 04:29:33 localhost podman[243915]: 2026-02-01 09:29:33.58969657 +0000 UTC m=+0.221159315 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 04:29:33 localhost podman[243915]: 2026-02-01 09:29:33.663652226 +0000 UTC m=+0.295114971 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:29:34 localhost python3.9[244032]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 1 04:29:34 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:34 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:29:34 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:34 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:29:34 localhost podman[244050]: 2026-02-01 09:29:34.656009749 +0000 UTC m=+0.118160635 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:29:34 localhost podman[244050]: 2026-02-01 09:29:34.668612593 +0000 UTC m=+0.130763479 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': 
['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:29:35 localhost python3.9[244163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:35 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:36 localhost python3.9[244251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769938174.614794-3717-202549061113124/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65415 DF PROTO=TCP SPT=50262 DPT=9102 SEQ=3382619436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6628D8E0000000001030307) Feb 1 04:29:37 localhost python3.9[244361]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:37 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:37 localhost systemd[1]: var-lib-containers-storage-overlay-abe2e37cef3553dd7ed72567236ba15185ae0f96cf280ad9def2a9cdb2b0b4c7-merged.mount: Deactivated successfully. 
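The ansible.legacy.copy entry above writes the rendered telemetry firewall snippet to /var/lib/edpm-config/firewall/telemetry.yaml and records its SHA-1 checksum. That makes it straightforward to confirm on the node that the deployed file still matches what the log says was written:

    # Verify the on-disk file against the checksum recorded in the log.
    echo "d942d984493b214bda2913f753ff68cdcedff00e  /var/lib/edpm-config/firewall/telemetry.yaml" \
      | sha1sum --check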
Feb 1 04:29:38 localhost python3.9[244471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:38 localhost python3.9[244528]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:39 localhost python3.9[244638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:39 localhost python3.9[244695]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.id90h_1_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31015 DF PROTO=TCP SPT=52738 DPT=9100 SEQ=3196346975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6629A0D0000000001030307) Feb 1 04:29:40 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:40 localhost python3.9[244805]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:40 localhost python3.9[244862]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
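Several tasks above manage rule files under /etc/nftables/ (iptables.nft here, and the edpm-* files that follow). Their syntax can be validated without touching the live ruleset using nft's check mode; the path is taken from the log:

    # Parse-only check of a managed rule file; nothing is applied.
    nft -c -f /etc/nftables/iptables.nft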
Feb 1 04:29:41 localhost podman[244880]: 2026-02-01 09:29:41.365630285 +0000 UTC m=+0.078692271 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Feb 1 04:29:41 localhost podman[244880]: 2026-02-01 09:29:41.370405541 +0000 UTC m=+0.083467497 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:29:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:29:41.747 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:29:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:29:41.747 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:29:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:29:41.747 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:29:41 localhost sshd[244990]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:29:41 localhost python3.9[244989]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:29:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19717 DF PROTO=TCP SPT=49910 DPT=9101 SEQ=4115187361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662A30D0000000001030307) Feb 1 04:29:42 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:29:42 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:42 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 1 04:29:42 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
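The ansible.legacy.command entry above captures the current ruleset as JSON via "nft -j list ruleset". The same dump can be inspected directly on the node, for example listing the tables and chains it contains with jq (jq itself is an assumption; nothing in the log shows it installed):

    # Dump the ruleset as JSON and print "table chain" pairs from it.
    nft -j list ruleset | jq -r '.nftables[] | select(.chain) | "\(.chain.table) \(.chain.name)"'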
Feb 1 04:29:42 localhost podman[245030]: 2026-02-01 09:29:42.689661463 +0000 UTC m=+0.442414803 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., release=1769056855, version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:29:42 localhost podman[245030]: 2026-02-01 09:29:42.702401351 +0000 UTC m=+0.455154651 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, version=9.7, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1769056855, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 1 04:29:42 localhost python3[245114]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 1 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 1 04:29:43 localhost python3.9[245232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:43 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:29:44 localhost python3.9[245289]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:44 localhost python3.9[245399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:45 localhost python3.9[245456]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19606 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1052902866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662B10E0000000001030307) Feb 1 04:29:46 localhost python3.9[245566]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:47 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 1 04:29:47 localhost systemd[1]: var-lib-containers-storage-overlay-892d1779a7f946097f73616f672cd69c2781ff491e090964134e591e5adb1a86-merged.mount: Deactivated successfully. 
Feb 1 04:29:47 localhost python3.9[245623]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:47 localhost systemd[1]: var-lib-containers-storage-overlay-892d1779a7f946097f73616f672cd69c2781ff491e090964134e591e5adb1a86-merged.mount: Deactivated successfully. Feb 1 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 1 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. Feb 1 04:29:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65417 DF PROTO=TCP SPT=50262 DPT=9102 SEQ=3382619436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662BD0D0000000001030307) Feb 1 04:29:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:29:48 localhost podman[245641]: 2026-02-01 09:29:48.862450727 +0000 UTC m=+0.086036805 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:29:48 localhost podman[245641]: 2026-02-01 09:29:48.875606609 +0000 UTC m=+0.099192737 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:49 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:29:49 localhost python3.9[245752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully. 
Feb 1 04:29:49 localhost python3.9[245809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:50 localhost systemd[1]: var-lib-containers-storage-overlay-ac18d148f1ccb0eaa519a008e32625aabf00d458250cb02e5015187c1942ecc7-merged.mount: Deactivated successfully. Feb 1 04:29:50 localhost systemd[1]: var-lib-containers-storage-overlay-5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786-merged.mount: Deactivated successfully. Feb 1 04:29:50 localhost systemd[1]: var-lib-containers-storage-overlay-ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2-merged.mount: Deactivated successfully. Feb 1 04:29:50 localhost python3.9[245920]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786-merged.mount: Deactivated successfully. Feb 1 04:29:51 localhost python3.9[246010]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769938190.1951253-4092-161351601652820/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 1 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 1 04:29:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31017 DF PROTO=TCP SPT=52738 DPT=9100 SEQ=3196346975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662CB0D0000000001030307) Feb 1 04:29:52 localhost python3.9[246120]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2-merged.mount: Deactivated successfully. Feb 1 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. 
Feb 1 04:29:53 localhost python3.9[246230]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:29:53 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Feb 1 04:29:54 localhost python3.9[246343]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:54 localhost python3.9[246453]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:29:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7184 DF PROTO=TCP SPT=47710 DPT=9101 SEQ=1485413952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662D70D0000000001030307) Feb 1 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 1 04:29:55 localhost python3.9[246564]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 1 04:29:56 localhost python3.9[246676]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:29:56 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:56 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 1 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:58 localhost python3.9[246789]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:29:58 localhost systemd[1]: session-56.scope: Deactivated successfully. Feb 1 04:29:58 localhost systemd[1]: session-56.scope: Consumed 26.776s CPU time. Feb 1 04:29:58 localhost systemd-logind[761]: Session 56 logged out. Waiting for processes to exit. Feb 1 04:29:58 localhost systemd-logind[761]: Removed session 56. Feb 1 04:29:58 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:29:59 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 1 04:29:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:29:59 localhost systemd[1]: var-lib-containers-storage-overlay-e3a7790e7cad798695025ef44722873ac2669462e661d130061be9d691861f40-merged.mount: Deactivated successfully. 
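The entries between 04:29:43 and 04:29:58 trace the edpm firewall update: fragment files are written under /etc/nftables/, the assembled set is syntax-checked with `nft -c -f -`, the include block is kept in /etc/sysconfig/nftables.conf, the chains file is loaded, and the flush/rules/update-jumps fragments are applied only while the edpm-rules.nft.changed stamp exists. A rough Python rendering of that sequence, assuming the same file paths; the blockinfile step and error handling are omitted:

    import os
    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    STAMP = "/etc/nftables/edpm-rules.nft.changed"

    def nft(args, stdin=None):
        return subprocess.run(["nft", *args], input=stdin, text=True, check=True)

    def concat(paths):
        return "".join(open(p).read() for p in paths)

    # 1. dry-run the assembled ruleset (the `cat ... | nft -c -f -` step)
    nft(["-c", "-f", "-"], stdin=concat(FRAGMENTS))

    # 2. (re)create the chains unconditionally
    nft(["-f", "/etc/nftables/edpm-chains.nft"])

    # 3. flush and reload the rules only when they actually changed
    if os.path.exists(STAMP):
        nft(["-f", "-"], stdin=concat(FRAGMENTS[1:4]))   # flushes, rules, update-jumps
        os.remove(STAMP)   # clear the stamp once the new rules are live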
Feb 1 04:29:59 localhost podman[246807]: 2026-02-01 09:29:59.646350293 +0000 UTC m=+0.093933517 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:29:59 localhost podman[246807]: 2026-02-01 09:29:59.655834441 +0000 UTC m=+0.103417695 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:29:59 localhost podman[246807]: unhealthy Feb 1 04:29:59 localhost systemd[1]: tmp-crun.4uaFHR.mount: Deactivated successfully. Feb 1 04:29:59 localhost nova_compute[225585]: 2026-02-01 09:29:59.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:00 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:00 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 1 04:30:00 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 1 04:30:00 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Main process exited, code=exited, status=1/FAILURE Feb 1 04:30:00 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Failed with result 'exit-code'. 
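At 04:29:59 the transient unit wrapping `/usr/bin/podman healthcheck run a1727fb0…` exits 1: podman prints "unhealthy" for podman_exporter and systemd records the unit as failed. The check can be reproduced directly, since `podman healthcheck run` returns 0 only when the container's configured test passes; container_is_healthy below is an illustrative helper:

    import subprocess

    def container_is_healthy(name_or_id: str) -> bool:
        """Run the container's configured healthcheck once, as the
        systemd-managed `podman healthcheck run <id>` units do."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", name_or_id],
            capture_output=True, text=True,
        )
        # podman prints "unhealthy" and exits non-zero when the check fails
        return result.returncode == 0

    if __name__ == "__main__":
        print("healthy" if container_is_healthy("podman_exporter") else "unhealthy")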
Feb 1 04:30:00 localhost nova_compute[225585]: 2026-02-01 09:30:00.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:01 localhost openstack_network_exporter[239388]: ERROR 09:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:30:01 localhost openstack_network_exporter[239388]: Feb 1 04:30:01 localhost openstack_network_exporter[239388]: ERROR 09:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:30:01 localhost openstack_network_exporter[239388]: Feb 1 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 1 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-4fd9ea2ebfbeb4119560e74e5b0456fd618118c9f72a7ecf288a55a3e1a95413-merged.mount: Deactivated successfully. Feb 1 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-4fd9ea2ebfbeb4119560e74e5b0456fd618118c9f72a7ecf288a55a3e1a95413-merged.mount: Deactivated successfully. Feb 1 04:30:02 localhost nova_compute[225585]: 2026-02-01 09:30:02.991 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.016 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.016 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.017 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. 
Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 1 04:30:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19910 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662F6C00000000001030307) Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:30:03 localhost nova_compute[225585]: 2026-02-01 09:30:03.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:30:04 localhost nova_compute[225585]: 2026-02-01 09:30:04.009 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19911 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662FACD0000000001030307) Feb 1 04:30:04 localhost sshd[246835]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:30:04 localhost systemd-logind[761]: New session 57 of user zuul. Feb 1 04:30:04 localhost systemd[1]: Started Session 57 of User zuul. 
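The recurring kernel "DROPPING:" entries are log-and-drop hits on br-ex: SYN packets from 192.168.122.10 towards the exporter ports (9100-9102, 9882) that match no accept rule. The payload is the kernel's key=value packet-log format, so it parses generically; a small sketch, with a shortened sample line and an illustrative parse_drop helper:

    SAMPLE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a "
              "MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 "
              "DST=192.168.122.108 LEN=60 TTL=62 PROTO=TCP SPT=49910 DPT=9101 DF SYN")

    def parse_drop(line: str):
        """Split a packet-log payload into key=value fields and bare flags."""
        payload = line.split("DROPPING:", 1)[-1]
        fields, flags = {}, []
        for token in payload.split():
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value
            else:
                flags.append(token)   # bare flags such as DF or SYN
        return fields, flags

    fields, flags = parse_drop(SAMPLE)
    print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"], flags)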
Feb 1 04:30:04 localhost podman[246837]: 2026-02-01 09:30:04.781486973 +0000 UTC m=+0.082927720 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible) Feb 1 04:30:04 localhost podman[246837]: 2026-02-01 09:30:04.842845075 +0000 UTC m=+0.144285842 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:30:04 localhost nova_compute[225585]: 2026-02-01 09:30:04.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:04 localhost nova_compute[225585]: 2026-02-01 09:30:04.996 225589 DEBUG oslo_service.periodic_task [None 
req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:04 localhost nova_compute[225585]: 2026-02-01 09:30:04.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.022 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.023 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.023 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.023 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.024 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:30:05 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 1 04:30:05 localhost systemd[1]: var-lib-containers-storage-overlay-8f493ed320f2136eba98c6f6d73d7580e3273443b9599c34d1438e87453daf45-merged.mount: Deactivated successfully. Feb 1 04:30:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65418 DF PROTO=TCP SPT=50262 DPT=9102 SEQ=3382619436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA662FD0D0000000001030307) Feb 1 04:30:05 localhost podman[236852]: @ - - [01/Feb/2026:09:25:43 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140477 "" "Go-http-client/1.1" Feb 1 04:30:05 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
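The resource-tracker audit at 04:30:05 shells out (via oslo_concurrency.processutils) to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size the RBD-backed storage that feeds the hypervisor resource view a few entries later. A minimal sketch of reading the cluster totals from that JSON; ceph_df is an illustrative helper, and the exact keys (stats.total_bytes, stats.total_avail_bytes) can vary between Ceph releases:

    import json
    import subprocess

    def ceph_df(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]
        gib = 1024 ** 3
        # cluster-wide raw capacity and free space
        print(f"capacity {stats['total_bytes'] / gib:.1f} GiB, "
              f"free {stats['total_avail_bytes'] / gib:.1f} GiB")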
Feb 1 04:30:05 localhost podman_exporter[236841]: ts=2026-02-01T09:30:05.156Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Feb 1 04:30:05 localhost podman_exporter[236841]: ts=2026-02-01T09:30:05.157Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Feb 1 04:30:05 localhost podman_exporter[236841]: ts=2026-02-01T09:30:05.157Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Feb 1 04:30:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:30:05 localhost podman[246993]: 2026-02-01 09:30:05.487123653 +0000 UTC m=+0.077586628 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:30:05 localhost podman[246993]: 2026-02-01 09:30:05.527740171 +0000 UTC m=+0.118203176 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.530 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:30:05 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:30:05 localhost python3.9[246992]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config/container-startup-config/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.667 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.669 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13098MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.669 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.669 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.734 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.735 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:30:05 localhost nova_compute[225585]: 2026-02-01 09:30:05.767 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:30:06 localhost nova_compute[225585]: 2026-02-01 09:30:06.235 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:30:06 localhost nova_compute[225585]: 2026-02-01 09:30:06.277 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:30:06 localhost nova_compute[225585]: 2026-02-01 09:30:06.293 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:30:06 localhost nova_compute[225585]: 2026-02-01 09:30:06.296 225589 DEBUG 
nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:30:06 localhost nova_compute[225585]: 2026-02-01 09:30:06.296 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:30:06 localhost python3.9[247146]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19912 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66302CD0000000001030307) Feb 1 04:30:06 localhost python3.9[247258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:07 localhost nova_compute[225585]: 2026-02-01 09:30:07.292 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:30:07 localhost python3.9[247366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:08 localhost python3.9[247452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938207.243748-101-236050600101142/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:09 localhost python3.9[247560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:10 localhost python3.9[247646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf mode=0644 
setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938208.7806091-101-100936056147069/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19913 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663128E0000000001030307) Feb 1 04:30:11 localhost python3.9[247754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:12 localhost python3.9[247840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938210.6498568-101-249700732951954/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=0711e0aa3ee7c85c85c3e1039f4da2e49344129d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
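From 04:30:07 the play lays down numbered snippets (01-neutron.conf, 01-rootwrap.conf, 01-neutron-sriov-agent.conf, followed below by 10-neutron-sriov.conf) in /var/lib/openstack/neutron-sriov-agent, presumably mounted into the agent container as an oslo.config config directory, where files are read in sorted order and later settings override earlier ones. A toy illustration of that precedence using only the standard library (configparser stands in for oslo.config; merged_config is an illustrative helper):

    import configparser
    from pathlib import Path

    def merged_config(conf_dir: str) -> configparser.ConfigParser:
        """Read *.conf snippets in lexical order; later files win on conflicts."""
        parser = configparser.ConfigParser()
        snippets = sorted(Path(conf_dir).glob("*.conf"))   # 01-... before 10-...
        parser.read(snippets)
        return parser

    if __name__ == "__main__":
        cfg = merged_config("/var/lib/openstack/neutron-sriov-agent")
        for section in cfg.sections():
            print(section, dict(cfg[section]))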
Feb 1 04:30:12 localhost podman[247858]: 2026-02-01 09:30:12.864413149 +0000 UTC m=+0.077658510 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent) Feb 1 04:30:12 localhost podman[247858]: 2026-02-01 09:30:12.900657014 +0000 UTC m=+0.113902365 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:30:12 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:30:14 localhost python3.9[247966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:30:14 localhost systemd[1]: tmp-crun.4AWsoI.mount: Deactivated successfully. Feb 1 04:30:14 localhost podman[248053]: 2026-02-01 09:30:14.869650271 +0000 UTC m=+0.085915541 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, vcs-type=git, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855) Feb 1 04:30:14 localhost python3.9[248052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938213.8174965-275-272263216462854/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=a74956efcd0a6873aac81fb89a0017e3332e5948 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:14 localhost podman[248053]: 2026-02-01 09:30:14.882587895 +0000 UTC m=+0.098853185 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, release=1769056855, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:30:14 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:30:15 localhost python3.9[248179]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:30:16 localhost python3.9[248291]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:17 localhost python3.9[248401]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:17 localhost python3.9[248458]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:18 localhost python3.9[248568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:18 localhost python3.9[248625]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19914 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663330F0000000001030307) Feb 1 04:30:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:30:19 localhost podman[248736]: 2026-02-01 09:30:19.45942101 +0000 UTC m=+0.069922733 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_compute) Feb 1 04:30:19 localhost podman[248736]: 2026-02-01 09:30:19.47352196 +0000 UTC m=+0.084023673 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:30:19 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:30:19 localhost python3.9[248735]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:20 localhost python3.9[248864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:20 localhost python3.9[248921]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:21 localhost python3.9[249032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:23 localhost python3.9[249107]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:24 localhost python3.9[249217]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:30:24 localhost systemd[1]: Reloading. Feb 1 04:30:24 localhost systemd-sysv-generator[249243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:30:24 localhost systemd-rc-local-generator[249240]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:25 localhost python3.9[249365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:26 localhost python3.9[249422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:26 localhost python3.9[249532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:27 localhost python3.9[249589]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:28 localhost python3.9[249699]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:30:28 localhost systemd[1]: Reloading. Feb 1 04:30:28 localhost systemd-sysv-generator[249728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 1 04:30:28 localhost systemd-rc-local-generator[249723]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:28 localhost systemd[1]: Starting Create netns directory... Feb 1 04:30:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 1 04:30:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 04:30:28 localhost systemd[1]: Finished Create netns directory. Feb 1 04:30:29 localhost python3.9[249850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:30 localhost podman[236852]: time="2026-02-01T09:30:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:30:30 localhost podman[236852]: @ - - [01/Feb/2026:09:30:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142457 "" "Go-http-client/1.1" Feb 1 04:30:30 localhost podman[236852]: @ - - [01/Feb/2026:09:30:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15432 "" "Go-http-client/1.1" Feb 1 04:30:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:30:30 localhost podman[249965]: 2026-02-01 09:30:30.605154127 +0000 UTC m=+0.085405508 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:30:30 localhost podman[249965]: 2026-02-01 09:30:30.617728839 +0000 UTC m=+0.097980190 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:30:30 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:30:30 localhost python3.9[249964]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:30:31 localhost python3.9[250097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:31 localhost openstack_network_exporter[239388]: ERROR 09:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:30:31 localhost openstack_network_exporter[239388]: Feb 1 04:30:31 localhost openstack_network_exporter[239388]: ERROR 09:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:30:31 localhost openstack_network_exporter[239388]: Feb 1 04:30:31 localhost python3.9[250185]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769938230.94414-710-113527581641549/.source.json _original_basename=.hahx198j follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:32 localhost python3.9[250293]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2994 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6636BF10000000001030307) Feb 1 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2995 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663700E0000000001030307) Feb 1 04:30:34 localhost python3.9[250597]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Feb 1 04:30:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19915 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663730D0000000001030307) Feb 1 04:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:30:35 localhost podman[250609]: 2026-02-01 09:30:35.867909403 +0000 UTC m=+0.080031658 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:30:35 localhost podman[250612]: 2026-02-01 09:30:35.952881418 +0000 UTC m=+0.162411616 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:30:35 localhost podman[250609]: 2026-02-01 09:30:35.964645506 +0000 UTC m=+0.176767721 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:30:35 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:30:36 localhost podman[250612]: 2026-02-01 09:30:36.015608824 +0000 UTC m=+0.225139012 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:30:36 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:30:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2996 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663780D0000000001030307) Feb 1 04:30:36 localhost python3.9[250755]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65419 DF PROTO=TCP SPT=50262 DPT=9102 SEQ=3382619436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6637B0E0000000001030307) Feb 1 04:30:39 localhost python3[250865]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json containers=['neutron_sriov_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:30:39 localhost podman[250901]: Feb 1 04:30:39 localhost podman[250901]: 2026-02-01 09:30:39.353744954 +0000 UTC m=+0.079904575 container create 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:30:39 localhost podman[250901]: 2026-02-01 09:30:39.310826414 +0000 UTC m=+0.036986065 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 1 04:30:39 localhost python3[250865]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 1 04:30:39 localhost auditd[727]: Audit daemon rotating log files Feb 1 04:30:40 localhost python3.9[251048]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2997 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66387CD0000000001030307) Feb 1 04:30:41 localhost python3.9[251160]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:41 localhost python3.9[251215]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:30:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:30:41.748 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:30:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:30:41.749 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:30:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:30:41.749 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:30:42 localhost python3.9[251324]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769938241.5803125-944-70358029612621/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:42 localhost python3.9[251379]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:30:42 localhost systemd[1]: Reloading. Feb 1 04:30:42 localhost systemd-sysv-generator[251408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:30:42 localhost systemd-rc-local-generator[251402]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:30:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:30:43 localhost podman[251416]: 2026-02-01 09:30:43.232549522 +0000 UTC m=+0.087845780 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:30:43 localhost podman[251416]: 2026-02-01 09:30:43.269827295 +0000 UTC m=+0.125123623 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:30:43 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:30:43 localhost python3.9[251488]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:30:43 localhost systemd[1]: Reloading. Feb 1 04:30:43 localhost systemd-rc-local-generator[251516]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:30:43 localhost systemd-sysv-generator[251519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:30:44 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 1 04:30:44 localhost systemd[1]: Started libcrun container. 
Feb 1 04:30:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19db9a1c30f9b931e44b1c23ff04fd048a4b5218c0b521ac43c5f273eeaa49c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 1 04:30:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19db9a1c30f9b931e44b1c23ff04fd048a4b5218c0b521ac43c5f273eeaa49c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:30:44 localhost podman[251528]: 2026-02-01 09:30:44.250118591 +0000 UTC m=+0.122930069 container init 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, managed_by=edpm_ansible) Feb 1 04:30:44 localhost podman[251528]: 2026-02-01 09:30:44.262357093 +0000 UTC m=+0.135168641 container start 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=neutron_sriov_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:30:44 localhost podman[251528]: neutron_sriov_agent Feb 1 04:30:44 localhost 
neutron_sriov_agent[251542]: + sudo -E kolla_set_configs Feb 1 04:30:44 localhost systemd[1]: Started neutron_sriov_agent container. Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Validating config file Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Copying service configuration files Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Writing out command to execute Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: ++ cat /run_command Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + ARGS= Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + sudo kolla_copy_cacerts Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + [[ ! -n '' ]] Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + . 
kolla_extend_start Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + umask 0022 Feb 1 04:30:44 localhost neutron_sriov_agent[251542]: + exec /usr/bin/neutron-sriov-nic-agent Feb 1 04:30:45 localhost python3.9[251664]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 1 04:30:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:30:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5433 writes, 23K keys, 5433 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5433 writes, 751 syncs, 7.23 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:30:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:30:45 localhost systemd[1]: tmp-crun.Mwqkdy.mount: Deactivated successfully. Feb 1 04:30:45 localhost podman[251682]: 2026-02-01 09:30:45.873340369 +0000 UTC m=+0.090412176 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc.) 
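The kolla_set_configs trace higher up (Loading config file at /var/lib/kolla/config_files/config.json, strategy COPY_ALWAYS, the Deleting/Copying of /etc/neutron/rootwrap.conf, the Setting permission lines, and the later cat /run_command followed by exec /usr/bin/neutron-sriov-nic-agent) is driven by the JSON file bind-mounted in as /var/lib/kolla/config_files/config.json from neutron_sriov_agent.json on the host. A minimal sketch of what that file plausibly contains, with every value inferred from the log lines rather than copied from the real file (the 0600 perm in particular is an assumption):

    {
      "command": "/usr/bin/neutron-sriov-nic-agent",
      "config_files": [
        {
          "source": "/etc/neutron.conf.d/01-rootwrap.conf",
          "dest": "/etc/neutron/rootwrap.conf",
          "owner": "neutron",
          "perm": "0600"
        }
      ],
      "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": true}
      ]
    }

Because the strategy is COPY_ALWAYS, the destination is deleted and re-copied on every start, and the "command" value is written to /run_command, which the start script then cats and execs, exactly as the shell trace shows.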
Feb 1 04:30:45 localhost podman[251682]: 2026-02-01 09:30:45.883576652 +0000 UTC m=+0.100648429 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, io.openshift.tags=minimal rhel9) Feb 1 04:30:45 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
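The recurring pattern just above (systemd starting '/usr/bin/podman healthcheck run <container id>', a container exec_died event from podman, then '<container id>.service: Deactivated successfully') is how podman runs container healthchecks under systemd: a transient timer periodically fires a transient service named after the container ID, which executes the healthcheck command declared in the container's config_data (here '/openstack/healthcheck ...') and exits. The same check can be run by hand when a container keeps flapping, for example:

    # run the configured healthcheck once; exit status reflects the result
    podman healthcheck run openstack_network_exporter && echo healthy || echo unhealthy
    # the current health state is also visible in podman ps output
    podman ps --filter name=openstack_network_exporter --format '{{.Names}} {{.Status}}'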
Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.939 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.939 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.939 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.939 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.939 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.940 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.940 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005604215.localdomain'}#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.940 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] RPC agent_id: nic-switch-agent.np0005604215.localdomain#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.945 2 INFO neutron.agent.agent_extensions_manager [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 1 04:30:45 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:45.945 2 INFO neutron.agent.agent_extensions_manager [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] Initializing agent extension 'qos'#033[00m Feb 1 04:30:46 localhost python3.9[251796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:30:46 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:46.462 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] Agent initialized successfully, now running... 
#033[00m Feb 1 04:30:46 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:46.462 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 1 04:30:46 localhost neutron_sriov_agent[251542]: 2026-02-01 09:30:46.463 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-86d6ffca-9bd7-4d35-a743-ea5092aeab08 - - - - - -] Agent out of sync with plugin!#033[00m Feb 1 04:30:47 localhost python3.9[251886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769938245.9698136-1079-74085641582447/.source.yaml _original_basename=.kpzqw_th follow=False checksum=b3cbbb2fba8ac1ae44c39a232429364988d5d801 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:30:47 localhost python3.9[251996]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:30:47 localhost systemd[1]: Stopping neutron_sriov_agent container... Feb 1 04:30:48 localhost systemd[1]: tmp-crun.euaI2v.mount: Deactivated successfully. Feb 1 04:30:48 localhost systemd[1]: libpod-521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891.scope: Deactivated successfully. Feb 1 04:30:48 localhost systemd[1]: libpod-521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891.scope: Consumed 1.771s CPU time. Feb 1 04:30:48 localhost podman[252000]: 2026-02-01 09:30:48.059244116 +0000 UTC m=+0.086092188 container died 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:30:48 localhost systemd[1]: tmp-crun.AHsFCG.mount: Deactivated successfully. 
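The SR-IOV agent banner above (Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}, Exclude Devices: {}, Loaded agent extensions: ['qos']) reflects the options the agent read from its configuration under /etc/neutron.conf.d. A minimal sketch of the corresponding settings, using only values that appear in the log:

    [sriov_nic]
    # physnet -> PF device mapping, reported above as "Physical Devices mappings"
    physical_device_mappings = dummy_sriov_net:dummy-dev
    # empty here; would list VFs to skip per PF if any were excluded
    exclude_devices =

    [agent]
    # produces the "Loaded agent extensions: ['qos']" line
    extensions = qos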
Feb 1 04:30:48 localhost podman[252000]: 2026-02-01 09:30:48.113693778 +0000 UTC m=+0.140541780 container cleanup 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Feb 1 04:30:48 localhost podman[252000]: neutron_sriov_agent Feb 1 04:30:48 localhost podman[252027]: 2026-02-01 09:30:48.194618082 +0000 UTC m=+0.052932357 container cleanup 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:30:48 localhost podman[252027]: neutron_sriov_agent Feb 1 04:30:48 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Feb 1 04:30:48 localhost systemd[1]: Stopped neutron_sriov_agent container. Feb 1 04:30:48 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 1 04:30:48 localhost systemd[1]: Started libcrun container. 
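The stop/start cycle above is the ansible.builtin.systemd task (state=restarted on edpm_neutron_sriov_agent.service) working through the container's systemd wrapper unit: the libpod scope is deactivated, podman runs its cleanup handlers, the unit is stopped and then started again, and the same container ID 521369ef... is reused when it comes back (see the init/start records that follow), so this is a restart of the existing container rather than a recreate. The same cycle can be reproduced manually on the node:

    # restart the wrapper unit and confirm the agent container came back
    systemctl restart edpm_neutron_sriov_agent.service
    systemctl is-active edpm_neutron_sriov_agent.service
    podman ps --filter name=neutron_sriov_agent --format '{{.Names}} {{.Status}}'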
Feb 1 04:30:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19db9a1c30f9b931e44b1c23ff04fd048a4b5218c0b521ac43c5f273eeaa49c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 1 04:30:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19db9a1c30f9b931e44b1c23ff04fd048a4b5218c0b521ac43c5f273eeaa49c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:30:48 localhost podman[252039]: 2026-02-01 09:30:48.331269635 +0000 UTC m=+0.103876935 container init 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 1 04:30:48 localhost podman[252039]: 2026-02-01 09:30:48.339844268 +0000 UTC m=+0.112451558 container start 521369efe03bd350bddc08a55e0a279c01928f7a9d9eaef6d6d9292f24ef4891 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-92014ee3c62b3d5f146d1dca0b039f1231ba53c2115cfac0921365576bf44e2c'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:30:48 localhost podman[252039]: neutron_sriov_agent Feb 1 04:30:48 localhost 
neutron_sriov_agent[252054]: + sudo -E kolla_set_configs Feb 1 04:30:48 localhost systemd[1]: Started neutron_sriov_agent container. Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Validating config file Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Copying service configuration files Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Writing out command to execute Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: ++ cat /run_command Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + ARGS= Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + sudo kolla_copy_cacerts Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + [[ ! -n '' ]] Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + . 
kolla_extend_start Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + umask 0022 Feb 1 04:30:48 localhost neutron_sriov_agent[252054]: + exec /usr/bin/neutron-sriov-nic-agent Feb 1 04:30:48 localhost systemd-logind[761]: Session 57 logged out. Waiting for processes to exit. Feb 1 04:30:48 localhost systemd[1]: session-57.scope: Deactivated successfully. Feb 1 04:30:48 localhost systemd[1]: session-57.scope: Consumed 22.557s CPU time. Feb 1 04:30:48 localhost systemd-logind[761]: Removed session 57. Feb 1 04:30:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2998 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663A90D0000000001030307) Feb 1 04:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:30:49 localhost systemd[1]: tmp-crun.Wxfmhw.mount: Deactivated successfully. Feb 1 04:30:49 localhost podman[252086]: 2026-02-01 09:30:49.874437725 +0000 UTC m=+0.091115417 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible) Feb 1 04:30:49 localhost podman[252086]: 2026-02-01 09:30:49.886682938 +0000 UTC m=+0.103360620 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:30:49 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
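The kernel 'DROPPING: IN=br-ex OUT= ... DPT=9102 ... SYN ...' messages that recur through this window (the first at 04:30:49 above) are netfilter LOG output: the prefix plus the IN/MAC/SRC/DST/PROTO/SPT/DPT fields are what the LOG target prints before a matching packet is discarded, here repeated TCP SYNs from 192.168.122.10 toward port 9102 on br-ex. One plausible shape of the rule pair producing them, purely as a sketch (the real chain, table, and match criteria on this node are not shown in the log):

    # sketch only - log, then drop, inbound 9102 on br-ex with the prefix seen above
    iptables -A INPUT -i br-ex -p tcp --dport 9102 -j LOG --log-prefix "DROPPING: "
    iptables -A INPUT -i br-ex -p tcp --dport 9102 -j DROP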
Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.009 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.010 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005604215.localdomain'}#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.010 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] RPC agent_id: nic-switch-agent.np0005604215.localdomain#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.014 2 INFO neutron.agent.agent_extensions_manager [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.015 2 INFO neutron.agent.agent_extensions_manager [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] Initializing agent extension 'qos'#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.193 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] Agent initialized successfully, now running... 
#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.194 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 1 04:30:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:30:50.196 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-92b1031d-f3e6-4d59-90ee-0d9c32b77b51 - - - - - -] Agent out of sync with plugin!#033[00m Feb 1 04:30:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:30:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5223 writes, 23K keys, 5223 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5223 writes, 658 syncs, 7.94 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:30:54 localhost sshd[252106]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:30:54 localhost systemd-logind[761]: New session 58 of user zuul. Feb 1 04:30:54 localhost systemd[1]: Started Session 58 of User zuul. Feb 1 04:30:55 localhost python3.9[252217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:30:56 localhost python3.9[252331]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:30:58 localhost python3.9[252394]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:31:00 localhost podman[236852]: time="2026-02-01T09:31:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:31:00 localhost podman[236852]: @ - - [01/Feb/2026:09:31:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144458 "" "Go-http-client/1.1" Feb 1 04:31:00 localhost podman[236852]: @ - - [01/Feb/2026:09:31:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15876 "" "Go-http-client/1.1" Feb 1 04:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:31:00 localhost systemd[1]: tmp-crun.kIQDMh.mount: Deactivated successfully. 
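The podman[236852] access-log lines above ('GET /v4.9.3/libpod/containers/json?...' and '/libpod/containers/stats?...') are the podman REST API service answering a client on its unix socket; the podman_exporter container (its config_data is dumped just below) points CONTAINER_HOST at /run/podman/podman.sock and polls these endpoints for metrics. The same endpoints can be queried directly when debugging the exporter, for example:

    # as root: list containers and fetch one round of stats over the podman socket
    curl -s --unix-socket /run/podman/podman.sock \
        'http://localhost/v4.9.3/libpod/containers/json?all=true' | head -c 300; echo
    curl -s --unix-socket /run/podman/podman.sock \
        'http://localhost/v4.9.3/libpod/containers/stats?stream=false' | head -c 300; echo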
Feb 1 04:31:00 localhost podman[252397]: 2026-02-01 09:31:00.885077032 +0000 UTC m=+0.097499586 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:31:00 localhost podman[252397]: 2026-02-01 09:31:00.896718306 +0000 UTC m=+0.109140860 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:31:00 localhost sshd[252421]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:31:00 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
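Both exporters health-checked in this window listen on the host network, the openstack_network_exporter on 9105 and the podman_exporter on 9882 (ports taken from their config_data above), so the scrape path can be verified from the node itself, assuming the conventional /metrics endpoint:

    # quick local scrape of the two exporters seen above
    curl -s http://127.0.0.1:9105/metrics | head -n 5
    curl -s http://127.0.0.1:9882/metrics | head -n 5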
Feb 1 04:31:00 localhost nova_compute[225585]: 2026-02-01 09:31:00.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:00 localhost nova_compute[225585]: 2026-02-01 09:31:00.997 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:01 localhost openstack_network_exporter[239388]: ERROR 09:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:31:01 localhost openstack_network_exporter[239388]: Feb 1 04:31:01 localhost openstack_network_exporter[239388]: ERROR 09:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:31:01 localhost openstack_network_exporter[239388]: Feb 1 04:31:02 localhost nova_compute[225585]: 2026-02-01 09:31:02.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:02 localhost nova_compute[225585]: 2026-02-01 09:31:02.997 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:31:03 localhost python3.9[252532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:31:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:31:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57330 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663E1210000000001030307) Feb 1 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57331 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663E50D0000000001030307) Feb 1 04:31:04 localhost nova_compute[225585]: 2026-02-01 09:31:04.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:04 localhost nova_compute[225585]: 2026-02-01 09:31:04.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:05 localhost python3.9[252645]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/container-startup-config setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:05 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2999 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663E90E0000000001030307) Feb 1 04:31:05 localhost python3.9[252755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:05 localhost nova_compute[225585]: 2026-02-01 09:31:05.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:05 localhost nova_compute[225585]: 2026-02-01 09:31:05.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:31:05 localhost nova_compute[225585]: 2026-02-01 09:31:05.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:31:06 localhost nova_compute[225585]: 2026-02-01 09:31:06.011 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:31:06 localhost nova_compute[225585]: 2026-02-01 09:31:06.011 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
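The long run of ceilometer_agent_compute 'Skip pollster <meter>, no resources found this cycle' DEBUG lines above is the compute polling agent walking its configured meter list while no instances exist on this host yet, so every libvirt-backed pollster returns an empty resource set. The meter list comes from the agent's polling definition file; a minimal polling.yaml sketch covering a few of the meters named in the log (the source name and interval are illustrative assumptions, not taken from this deployment):

    sources:
        - name: compute_pollsters      # illustrative name
          interval: 120                # assumed; the real interval is not logged here
          meters:
            - cpu
            - memory.usage
            - disk.device.read.bytes
            - network.incoming.packets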
Feb 1 04:31:06 localhost podman[252867]: 2026-02-01 09:31:06.30671702 +0000 UTC m=+0.085444080 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:31:06 localhost podman[252867]: 2026-02-01 09:31:06.318707294 +0000 UTC m=+0.097434384 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:31:06 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
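The ansible-ansible.builtin.file invocations earlier in this stretch (for /var/lib/edpm-config/container-startup-config and /var/lib/neutron) are plain directory-creation tasks that also pin the SELinux type to container_file_t so the paths can be bind-mounted into containers. Rendered as a task, the first of them corresponds to roughly the following (the task name is illustrative; all parameters are taken from the logged invocation):

    - name: Create container startup config directory   # illustrative name
      ansible.builtin.file:
        path: /var/lib/edpm-config/container-startup-config
        state: directory
        owner: zuul
        group: zuul
        mode: "0750"
        setype: container_file_t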
Feb 1 04:31:06 localhost podman[252866]: 2026-02-01 09:31:06.414268692 +0000 UTC m=+0.192696232 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 04:31:06 localhost python3.9[252865]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:06 localhost podman[252866]: 2026-02-01 09:31:06.486709975 +0000 UTC m=+0.265137555 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller) Feb 1 04:31:06 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:31:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57332 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663ED0D0000000001030307) Feb 1 04:31:06 localhost nova_compute[225585]: 2026-02-01 09:31:06.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:06 localhost nova_compute[225585]: 2026-02-01 09:31:06.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.020 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.020 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.021 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.021 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.022 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:31:07 localhost python3.9[253022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.450 225589 DEBUG 
oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19916 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1565514684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663F10D0000000001030307) Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.658 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.659 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=13035MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.660 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.660 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:31:07 localhost python3.9[253154]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.737 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.738 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:31:07 localhost nova_compute[225585]: 2026-02-01 09:31:07.760 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:31:08 localhost nova_compute[225585]: 2026-02-01 09:31:08.185 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:31:08 localhost nova_compute[225585]: 2026-02-01 09:31:08.193 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:31:08 localhost nova_compute[225585]: 2026-02-01 09:31:08.213 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:31:08 localhost nova_compute[225585]: 2026-02-01 09:31:08.216 225589 DEBUG 
nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:31:08 localhost nova_compute[225585]: 2026-02-01 09:31:08.217 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:31:08 localhost python3.9[253284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:08 localhost python3.9[253396]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:10 localhost python3.9[253506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57333 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA663FCCD0000000001030307) Feb 1 04:31:10 localhost python3.9[253594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938269.5048542-275-51986481746936/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=472c5e922ae22c8bdcaef73d1ca73ce5597b440e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:11 localhost python3.9[253702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:12 localhost python3.9[253788]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938271.1218581-320-66902994009648/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:31:13 localhost systemd[1]: tmp-crun.RatLOy.mount: Deactivated successfully. Feb 1 04:31:13 localhost podman[253897]: 2026-02-01 09:31:13.879516437 +0000 UTC m=+0.093581899 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:31:13 localhost podman[253897]: 2026-02-01 09:31:13.909991459 +0000 UTC m=+0.124056971 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:31:13 localhost python3.9[253896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:13 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:31:14 localhost python3.9[254000]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938272.3197813-320-1082298037086/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:15 localhost python3.9[254108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:16 localhost python3.9[254194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938274.5805-320-49681259680869/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=1165b10d39ffebe4cf306f978262ccf67cc9110d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:31:16 localhost systemd[1]: tmp-crun.0AQkyz.mount: Deactivated successfully. 
Feb 1 04:31:16 localhost podman[254212]: 2026-02-01 09:31:16.875954048 +0000 UTC m=+0.085571524 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 1 04:31:16 localhost podman[254212]: 2026-02-01 09:31:16.910349865 +0000 UTC m=+0.119967341 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, name=ubi9/ubi-minimal, version=9.7, distribution-scope=public, config_id=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9) Feb 1 04:31:16 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:31:17 localhost python3.9[254322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:17 localhost python3.9[254408]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938276.992342-495-196133821449027/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=a74956efcd0a6873aac81fb89a0017e3332e5948 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:18 localhost python3.9[254516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57334 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6641D0D0000000001030307) Feb 1 04:31:19 localhost python3.9[254602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938278.2622726-539-117828673137640/.source follow=False _original_basename=haproxy.j2 checksum=eddfecb822bb60e7241db0fd719c7552d2d25452 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:19 localhost python3.9[254710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:20 localhost python3.9[254796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938279.4214728-539-134782723114566/.source follow=False _original_basename=dnsmasq.j2 checksum=a6b8b2fb47e7419d250eaee9e3565b13fff8f42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:31:20 localhost podman[254841]: 2026-02-01 09:31:20.873690434 +0000 UTC m=+0.088097308 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:31:20 localhost podman[254841]: 2026-02-01 09:31:20.884269548 +0000 UTC m=+0.098676422 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute) Feb 1 04:31:20 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:31:21 localhost python3.9[254922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:21 localhost python3.9[254977]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:22 localhost python3.9[255085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:22 localhost python3.9[255171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938281.8483894-626-83065069201312/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:23 localhost python3.9[255315]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:31:24 localhost python3.9[255459]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:25 localhost python3.9[255587]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:25 localhost python3.9[255644]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file 
path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:26 localhost python3.9[255754]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:26 localhost python3.9[255811]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:28 localhost python3.9[255921]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:28 localhost python3.9[256031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:29 localhost python3.9[256088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:30 localhost podman[236852]: time="2026-02-01T09:31:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:31:30 localhost podman[236852]: @ - - [01/Feb/2026:09:31:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144458 "" "Go-http-client/1.1" Feb 1 04:31:30 localhost podman[236852]: @ - - [01/Feb/2026:09:31:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15871 "" "Go-http-client/1.1" Feb 1 04:31:30 localhost python3.9[256198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:30 localhost python3.9[256255]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:31 localhost openstack_network_exporter[239388]: ERROR 09:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:31:31 localhost openstack_network_exporter[239388]: Feb 1 04:31:31 localhost openstack_network_exporter[239388]: ERROR 09:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:31:31 localhost openstack_network_exporter[239388]: Feb 1 04:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:31:31 localhost systemd[1]: tmp-crun.HP7Tjn.mount: Deactivated successfully. Feb 1 04:31:31 localhost podman[256281]: 2026-02-01 09:31:31.867698989 +0000 UTC m=+0.083120199 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:31:31 localhost podman[256281]: 2026-02-01 09:31:31.903659344 +0000 UTC m=+0.119080524 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:31:31 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:31:32 localhost python3.9[256389]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:31:32 localhost systemd[1]: Reloading. 
Feb 1 04:31:32 localhost systemd-rc-local-generator[256410]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:31:32 localhost systemd-sysv-generator[256414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18572 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66456510000000001030307) Feb 1 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18573 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6645A4D0000000001030307) Feb 1 04:31:34 localhost python3.9[256536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:35 localhost python3.9[256593]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57335 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6645D0D0000000001030307) Feb 1 04:31:35 localhost 
python3.9[256703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:36 localhost python3.9[256760]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18574 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664624D0000000001030307) Feb 1 04:31:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:31:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:31:36 localhost podman[256816]: 2026-02-01 09:31:36.833001015 +0000 UTC m=+0.084015727 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:31:36 localhost podman[256816]: 2026-02-01 09:31:36.875370849 +0000 UTC m=+0.126385561 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:31:36 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:31:36 localhost podman[256817]: 2026-02-01 09:31:36.896641499 +0000 UTC m=+0.143835378 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:31:36 localhost podman[256817]: 2026-02-01 09:31:36.912738404 +0000 UTC m=+0.159932293 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:31:36 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:31:37 localhost python3.9[256918]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:31:37 localhost systemd[1]: Reloading. Feb 1 04:31:37 localhost systemd-sysv-generator[256946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:31:37 localhost systemd-rc-local-generator[256941]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:37 localhost systemd[1]: Starting Create netns directory... Feb 1 04:31:37 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 1 04:31:37 localhost systemd[1]: Finished Create netns directory. Feb 1 04:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3000 DF PROTO=TCP SPT=58216 DPT=9102 SEQ=319385334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664670D0000000001030307) Feb 1 04:31:37 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Feb 1 04:31:39 localhost python3.9[257070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:40 localhost python3.9[257180]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:31:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18575 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664720D0000000001030307) Feb 1 04:31:41 localhost python3.9[257290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:31:41.748 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:31:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:31:41.749 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:31:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:31:41.750 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:31:41 localhost python3.9[257378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769938300.780264-1094-58194158394229/.source.json _original_basename=.2qojcxw3 follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:42 localhost python3.9[257486]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 1 04:31:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:31:44 localhost podman[257698]: 2026-02-01 09:31:44.880954521 +0000 UTC m=+0.092249210 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:31:44 localhost podman[257698]: 2026-02-01 09:31:44.89072841 +0000 UTC m=+0.102023109 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:31:44 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:31:46 localhost python3.9[257809]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Feb 1 04:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:31:47 localhost podman[257919]: 2026-02-01 09:31:47.091588331 +0000 UTC m=+0.073601209 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, release=1769056855, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:31:47 localhost podman[257919]: 2026-02-01 09:31:47.104336517 +0000 UTC m=+0.086349425 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, build-date=2026-01-22T05:09:47Z, release=1769056855) Feb 1 04:31:47 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:31:47 localhost python3.9[257920]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:31:48 localhost python3[258049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json containers=['neutron_dhcp_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:31:48 localhost podman[258086]: Feb 1 04:31:48 localhost podman[258086]: 2026-02-01 09:31:48.615399158 +0000 UTC m=+0.071499267 container create b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, 
config_id=neutron_dhcp, managed_by=edpm_ansible) Feb 1 04:31:48 localhost podman[258086]: 2026-02-01 09:31:48.574610161 +0000 UTC m=+0.030710380 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:31:48 localhost python3[258049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18576 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664930D0000000001030307) Feb 1 04:31:49 localhost python3.9[258233]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:31:50 localhost python3.9[258345]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:50 localhost python3.9[258400]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:31:51 localhost systemd[1]: tmp-crun.LMjYxa.mount: Deactivated successfully. Feb 1 04:31:51 localhost podman[258510]: 2026-02-01 09:31:51.395495516 +0000 UTC m=+0.091275561 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 1 04:31:51 localhost podman[258510]: 2026-02-01 09:31:51.430470421 +0000 UTC m=+0.126250416 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:31:51 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:31:51 localhost python3.9[258509]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769938310.8014548-1328-3997270296890/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:52 localhost python3.9[258582]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:31:52 localhost systemd[1]: Reloading. Feb 1 04:31:52 localhost systemd-rc-local-generator[258604]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:31:52 localhost systemd-sysv-generator[258612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost python3.9[258673]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:31:53 localhost systemd[1]: Reloading. Feb 1 04:31:53 localhost systemd-sysv-generator[258704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:31:53 localhost systemd-rc-local-generator[258698]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:31:53 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 1 04:31:53 localhost systemd[1]: Started libcrun container. 
Feb 1 04:31:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a989f60b3a2080a5252e7eea19f705cae8c281a273a318f9e5a90544c50aa0b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 1 04:31:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a989f60b3a2080a5252e7eea19f705cae8c281a273a318f9e5a90544c50aa0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:31:53 localhost podman[258713]: 2026-02-01 09:31:53.702772935 +0000 UTC m=+0.138106228 container init b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:31:53 localhost podman[258713]: 2026-02-01 09:31:53.713357848 +0000 UTC m=+0.148691141 container start b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:31:53 localhost podman[258713]: neutron_dhcp_agent Feb 1 04:31:53 localhost systemd[1]: Started neutron_dhcp_agent container. Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + sudo -E kolla_set_configs Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Validating config file Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Copying service configuration files Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Writing out command to execute Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: ++ cat /run_command Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + CMD=/usr/bin/neutron-dhcp-agent Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + ARGS= Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + sudo kolla_copy_cacerts Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + [[ ! -n '' ]] Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + . kolla_extend_start Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: Running command: '/usr/bin/neutron-dhcp-agent' Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + umask 0022 Feb 1 04:31:53 localhost neutron_dhcp_agent[258727]: + exec /usr/bin/neutron-dhcp-agent Feb 1 04:31:54 localhost python3.9[258849]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 1 04:31:55 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:55.056 258731 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 1 04:31:55 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:55.056 258731 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44#033[00m Feb 1 04:31:55 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:55.417 258731 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 1 04:31:55 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:55.924 258731 INFO neutron.agent.dhcp.agent [None req-6c0f86bd-e9af-4b61-b1b2-4e920b9c8b83 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 1 04:31:55 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:55.924 258731 INFO neutron.agent.dhcp.agent [None req-6c0f86bd-e9af-4b61-b1b2-4e920b9c8b83 - - - - - -] Synchronizing state complete#033[00m Feb 1 04:31:56 localhost neutron_dhcp_agent[258727]: 2026-02-01 09:31:56.028 258731 INFO neutron.agent.dhcp.agent [None req-6c0f86bd-e9af-4b61-b1b2-4e920b9c8b83 - - - - - -] DHCP agent started#033[00m Feb 1 04:31:56 localhost python3.9[258960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:31:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:31:56.439 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:31:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:31:56.440 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:31:56 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:31:56.441 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:31:57 localhost python3.9[259050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769938315.8374841-1463-238373792297138/.source.yaml _original_basename=.m90oye6s follow=False checksum=552a83c15bca59d2cd0078e31025ce01db8bbba5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:31:58 localhost python3.9[259160]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:31:58 localhost systemd[1]: Stopping neutron_dhcp_agent container... Feb 1 04:31:58 localhost systemd[1]: libpod-b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d.scope: Deactivated successfully. Feb 1 04:31:58 localhost systemd[1]: libpod-b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d.scope: Consumed 1.773s CPU time. Feb 1 04:31:58 localhost podman[259164]: 2026-02-01 09:31:58.578702305 +0000 UTC m=+0.073640119 container died b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=neutron_dhcp, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:31:58 localhost podman[259164]: 2026-02-01 09:31:58.630085606 +0000 UTC m=+0.125023380 container cleanup b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:31:58 localhost podman[259164]: neutron_dhcp_agent Feb 1 04:31:58 localhost podman[259204]: error opening file `/run/crun/b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d/status`: No such file or directory Feb 1 04:31:58 localhost podman[259190]: 2026-02-01 09:31:58.733681481 +0000 UTC m=+0.068723294 container cleanup b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=neutron_dhcp, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS) Feb 1 04:31:58 localhost podman[259190]: neutron_dhcp_agent Feb 1 04:31:58 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Feb 1 04:31:58 localhost systemd[1]: Stopped neutron_dhcp_agent container. Feb 1 04:31:58 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 1 04:31:58 localhost systemd[1]: Started libcrun container. Feb 1 04:31:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a989f60b3a2080a5252e7eea19f705cae8c281a273a318f9e5a90544c50aa0b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 1 04:31:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a989f60b3a2080a5252e7eea19f705cae8c281a273a318f9e5a90544c50aa0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:31:58 localhost podman[259206]: 2026-02-01 09:31:58.872037025 +0000 UTC m=+0.109553123 container init b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent) Feb 1 04:31:58 localhost podman[259206]: 2026-02-01 09:31:58.880724332 +0000 UTC m=+0.118240430 container start b750db517c59365e833262da7c272dcf9cf70ce4eee90892de4d10b758378d2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-8e84beda5eab58443510a4045d90bb0c7d6ea956fd7905f3c2809e1ba1dc4a23'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', 
'/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:31:58 localhost podman[259206]: neutron_dhcp_agent Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + sudo -E kolla_set_configs Feb 1 04:31:58 localhost systemd[1]: Started neutron_dhcp_agent container. Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Validating config file Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Copying service configuration files Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Writing out command to execute Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for 
/var/lib/neutron/kill_scripts/dnsmasq-kill Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: ++ cat /run_command Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + CMD=/usr/bin/neutron-dhcp-agent Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + ARGS= Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + sudo kolla_copy_cacerts Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + [[ ! -n '' ]] Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + . kolla_extend_start Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: Running command: '/usr/bin/neutron-dhcp-agent' Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + umask 0022 Feb 1 04:31:58 localhost neutron_dhcp_agent[259221]: + exec /usr/bin/neutron-dhcp-agent Feb 1 04:31:59 localhost systemd-logind[761]: Session 58 logged out. Waiting for processes to exit. Feb 1 04:31:59 localhost systemd[1]: session-58.scope: Deactivated successfully. Feb 1 04:31:59 localhost systemd[1]: session-58.scope: Consumed 34.604s CPU time. Feb 1 04:31:59 localhost systemd-logind[761]: Removed session 58. 
Feb 1 04:32:00 localhost podman[236852]: time="2026-02-01T09:32:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:32:00 localhost podman[236852]: @ - - [01/Feb/2026:09:32:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:32:00 localhost podman[236852]: @ - - [01/Feb/2026:09:32:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16310 "" "Go-http-client/1.1" Feb 1 04:32:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:00.149 259225 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 1 04:32:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:00.150 259225 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44#033[00m Feb 1 04:32:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:00.523 259225 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 1 04:32:01 localhost openstack_network_exporter[239388]: ERROR 09:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:32:01 localhost openstack_network_exporter[239388]: Feb 1 04:32:01 localhost openstack_network_exporter[239388]: ERROR 09:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:32:01 localhost openstack_network_exporter[239388]: Feb 1 04:32:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:01.910 259225 INFO neutron.agent.dhcp.agent [None req-4946e334-211f-4f65-8db9-f55ae8c5f289 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 1 04:32:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:01.911 259225 INFO neutron.agent.dhcp.agent [None req-4946e334-211f-4f65-8db9-f55ae8c5f289 - - - - - -] Synchronizing state complete#033[00m Feb 1 04:32:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:32:01.938 259225 INFO neutron.agent.dhcp.agent [None req-4946e334-211f-4f65-8db9-f55ae8c5f289 - - - - - -] DHCP agent started#033[00m Feb 1 04:32:02 localhost nova_compute[225585]: 2026-02-01 09:32:02.217 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:32:02 localhost systemd[1]: tmp-crun.7r9AUN.mount: Deactivated successfully. 
Feb 1 04:32:02 localhost podman[259254]: 2026-02-01 09:32:02.869402311 +0000 UTC m=+0.084942543 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:32:02 localhost podman[259254]: 2026-02-01 09:32:02.901781608 +0000 UTC m=+0.117321880 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:32:02 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:32:02 localhost nova_compute[225585]: 2026-02-01 09:32:02.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29003 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664CB810000000001030307) Feb 1 04:32:03 localhost nova_compute[225585]: 2026-02-01 09:32:03.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:03 localhost nova_compute[225585]: 2026-02-01 09:32:03.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29004 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664CF8D0000000001030307) Feb 1 04:32:04 localhost nova_compute[225585]: 2026-02-01 09:32:04.991 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:05 localhost nova_compute[225585]: 2026-02-01 09:32:05.015 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18577 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664D30E0000000001030307) Feb 1 04:32:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29005 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664D78E0000000001030307) Feb 1 04:32:06 localhost nova_compute[225585]: 2026-02-01 09:32:06.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:06 localhost nova_compute[225585]: 2026-02-01 09:32:06.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:06 localhost nova_compute[225585]: 2026-02-01 09:32:06.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.026 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.027 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.027 225589 
DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.027 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.028 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.471 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57336 DF PROTO=TCP SPT=45564 DPT=9102 SEQ=579589686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664DB0E0000000001030307) Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.633 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.635 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12923MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.635 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.635 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.719 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.720 225589 DEBUG 
nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:32:07 localhost nova_compute[225585]: 2026-02-01 09:32:07.735 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:32:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:32:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:32:07 localhost systemd[1]: tmp-crun.6reW7a.mount: Deactivated successfully. Feb 1 04:32:07 localhost podman[259300]: 2026-02-01 09:32:07.881361567 +0000 UTC m=+0.096844457 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:32:07 localhost podman[259300]: 2026-02-01 09:32:07.924316528 +0000 UTC m=+0.139799468 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller) Feb 1 04:32:07 localhost systemd[1]: tmp-crun.6FYcSA.mount: Deactivated successfully. Feb 1 04:32:07 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:32:07 localhost podman[259301]: 2026-02-01 09:32:07.929230194 +0000 UTC m=+0.141858729 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:32:08 localhost podman[259301]: 2026-02-01 09:32:08.009503259 +0000 UTC m=+0.222131854 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:32:08 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:32:08 localhost nova_compute[225585]: 2026-02-01 09:32:08.264 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:32:08 localhost nova_compute[225585]: 2026-02-01 09:32:08.269 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:32:08 localhost nova_compute[225585]: 2026-02-01 09:32:08.284 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:32:08 localhost nova_compute[225585]: 2026-02-01 09:32:08.287 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:32:08 localhost nova_compute[225585]: 2026-02-01 09:32:08.287 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:32:09 localhost nova_compute[225585]: 2026-02-01 09:32:09.283 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:09 localhost nova_compute[225585]: 2026-02-01 09:32:09.284 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:32:09 localhost nova_compute[225585]: 2026-02-01 09:32:09.284 225589 DEBUG nova.compute.manager [None 
req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:32:09 localhost nova_compute[225585]: 2026-02-01 09:32:09.285 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:32:09 localhost nova_compute[225585]: 2026-02-01 09:32:09.302 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29006 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA664E74D0000000001030307) Feb 1 04:32:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:32:15 localhost podman[259366]: 2026-02-01 09:32:15.862436895 +0000 UTC m=+0.079223285 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:32:15 localhost podman[259366]: 2026-02-01 09:32:15.896785961 +0000 UTC m=+0.113572311 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:32:15 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:32:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:32:17 localhost podman[259385]: 2026-02-01 09:32:17.875447248 +0000 UTC m=+0.076495826 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal) Feb 1 04:32:17 localhost podman[259385]: 2026-02-01 09:32:17.887125157 +0000 UTC m=+0.088173705 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9/ubi-minimal, release=1769056855, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:32:17 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:32:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29007 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665070D0000000001030307) Feb 1 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:32:21 localhost podman[259406]: 2026-02-01 09:32:21.866371249 +0000 UTC m=+0.077644011 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:32:21 localhost podman[259406]: 2026-02-01 09:32:21.901755759 +0000 UTC m=+0.113028511 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:32:21 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:32:23 localhost sshd[259426]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:32:30 localhost podman[236852]: time="2026-02-01T09:32:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:32:30 localhost podman[236852]: @ - - [01/Feb/2026:09:32:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:32:30 localhost podman[236852]: @ - - [01/Feb/2026:09:32:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16300 "" "Go-http-client/1.1" Feb 1 04:32:31 localhost openstack_network_exporter[239388]: ERROR 09:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:32:31 localhost openstack_network_exporter[239388]: Feb 1 04:32:31 localhost openstack_network_exporter[239388]: ERROR 09:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:32:31 localhost openstack_network_exporter[239388]: Feb 1 04:32:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36026 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66540B10000000001030307) Feb 1 04:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:32:33 localhost podman[259514]: 2026-02-01 09:32:33.866003413 +0000 UTC m=+0.079993623 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:32:33 localhost podman[259514]: 2026-02-01 09:32:33.873233405 +0000 UTC m=+0.087223595 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:32:33 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36027 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66544CE0000000001030307) Feb 1 04:32:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29008 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665470D0000000001030307) Feb 1 04:32:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36028 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6654CCE0000000001030307) Feb 1 04:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18578 DF PROTO=TCP SPT=38724 DPT=9102 SEQ=758592592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665510D0000000001030307) Feb 1 04:32:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:32:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:32:38 localhost podman[259538]: 2026-02-01 09:32:38.867332833 +0000 UTC m=+0.078602940 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:32:38 localhost podman[259538]: 2026-02-01 09:32:38.877709463 +0000 UTC m=+0.088979640 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:32:38 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:32:38 localhost podman[259537]: 2026-02-01 09:32:38.965331419 +0000 UTC m=+0.179890467 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:32:39 localhost podman[259537]: 2026-02-01 09:32:39.074775838 +0000 UTC m=+0.289334906 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller) Feb 1 04:32:39 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:32:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36029 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6655C8D0000000001030307) Feb 1 04:32:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:32:41.749 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:32:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:32:41.749 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:32:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:32:41.750 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:32:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:32:46 localhost podman[259584]: 2026-02-01 09:32:46.862338134 +0000 UTC m=+0.077653047 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:32:46 localhost podman[259584]: 2026-02-01 09:32:46.892548406 +0000 UTC m=+0.107863259 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:32:46 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:32:48 localhost sshd[259602]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:32:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:32:48 localhost systemd-logind[761]: New session 59 of user zuul. Feb 1 04:32:48 localhost systemd[1]: Started Session 59 of User zuul. Feb 1 04:32:48 localhost podman[259604]: 2026-02-01 09:32:48.744431231 +0000 UTC m=+0.077419560 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, release=1769056855, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9/ubi-minimal) Feb 1 04:32:48 localhost podman[259604]: 2026-02-01 09:32:48.760768856 +0000 UTC m=+0.093757225 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, release=1769056855, name=ubi9/ubi-minimal) Feb 1 04:32:48 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:32:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36030 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6657D0D0000000001030307) Feb 1 04:32:49 localhost python3.9[259732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:32:51 localhost python3.9[259844]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:32:51 localhost network[259861]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:32:51 localhost network[259862]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:32:51 localhost network[259863]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:32:52 localhost systemd[1]: tmp-crun.fsdkc1.mount: Deactivated successfully. Feb 1 04:32:52 localhost podman[259869]: 2026-02-01 09:32:52.881922654 +0000 UTC m=+0.093299210 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true) Feb 1 04:32:52 localhost podman[259869]: 2026-02-01 09:32:52.919741771 +0000 UTC m=+0.131118397 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:32:52 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:32:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:32:58 localhost python3.9[260113]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 1 04:32:59 localhost python3.9[260176]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:33:00 localhost podman[236852]: time="2026-02-01T09:33:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:33:00 localhost podman[236852]: @ - - [01/Feb/2026:09:33:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:33:00 localhost podman[236852]: @ - - [01/Feb/2026:09:33:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16315 "" "Go-http-client/1.1" Feb 1 04:33:01 localhost openstack_network_exporter[239388]: ERROR 09:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:33:01 localhost openstack_network_exporter[239388]: Feb 1 04:33:01 localhost openstack_network_exporter[239388]: ERROR 09:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:33:01 localhost openstack_network_exporter[239388]: Feb 1 04:33:02 localhost nova_compute[225585]: 2026-02-01 09:33:02.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:02 localhost nova_compute[225585]: 2026-02-01 09:33:02.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:33:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:33:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5544 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665B5E10000000001030307) Feb 1 04:33:04 localhost python3.9[260288]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5545 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665B9CD0000000001030307) Feb 1 04:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:33:04 localhost podman[260360]: 2026-02-01 09:33:04.868978548 +0000 UTC m=+0.080846276 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:33:04 localhost podman[260360]: 2026-02-01 09:33:04.881622248 +0000 UTC m=+0.093489936 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:33:04 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:33:04 localhost nova_compute[225585]: 2026-02-01 09:33:04.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:04 localhost nova_compute[225585]: 2026-02-01 09:33:04.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:33:05 localhost python3.9[260421]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:33:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36031 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665BD0D0000000001030307) Feb 1 04:33:05 localhost nova_compute[225585]: 2026-02-01 09:33:05.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:05 localhost python3.9[260532]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:33:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5546 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665C1CD0000000001030307) Feb 1 04:33:06 localhost nova_compute[225585]: 2026-02-01 09:33:06.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:06 localhost nova_compute[225585]: 2026-02-01 09:33:06.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.016 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.016 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.016 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.017 225589 DEBUG nova.compute.resource_tracker [None 
req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.017 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:33:07 localhost python3.9[260644]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29009 DF PROTO=TCP SPT=55632 DPT=9102 SEQ=1180852269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665C50D0000000001030307) Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.482 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.670 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.672 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12912MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.673 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.674 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.750 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.751 225589 DEBUG 
nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:33:07 localhost nova_compute[225585]: 2026-02-01 09:33:07.770 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:33:08 localhost nova_compute[225585]: 2026-02-01 09:33:08.234 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:33:08 localhost nova_compute[225585]: 2026-02-01 09:33:08.240 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:33:08 localhost nova_compute[225585]: 2026-02-01 09:33:08.256 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:33:08 localhost nova_compute[225585]: 2026-02-01 09:33:08.258 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:33:08 localhost nova_compute[225585]: 2026-02-01 09:33:08.259 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:33:08 localhost python3.9[260796]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:33:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:33:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:33:09 localhost podman[260802]: 2026-02-01 09:33:09.522651737 +0000 UTC m=+0.075232102 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:33:09 localhost podman[260802]: 2026-02-01 09:33:09.534586525 +0000 UTC m=+0.087166810 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:33:09 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:33:09 localhost podman[260801]: 2026-02-01 09:33:09.581248155 +0000 UTC m=+0.137644237 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller) Feb 1 04:33:09 localhost podman[260801]: 2026-02-01 09:33:09.640845344 +0000 UTC m=+0.197241426 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:33:09 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.256 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.257 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.258 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.258 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.276 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:33:10 localhost nova_compute[225585]: 2026-02-01 09:33:10.276 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:33:10 localhost python3.9[260958]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5547 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665D18E0000000001030307) Feb 1 04:33:12 localhost python3.9[261068]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:33:12 localhost network[261085]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:33:12 localhost network[261086]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:33:12 localhost network[261087]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:33:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:33:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:33:17 localhost podman[261200]: 2026-02-01 09:33:17.864954299 +0000 UTC m=+0.079445793 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3) Feb 1 04:33:17 localhost podman[261200]: 2026-02-01 09:33:17.873646437 +0000 UTC m=+0.088137971 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:33:17 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:33:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5548 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA665F10D0000000001030307) Feb 1 04:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:33:19 localhost podman[261226]: 2026-02-01 09:33:19.071806724 +0000 UTC m=+0.079786783 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:33:19 localhost podman[261226]: 2026-02-01 09:33:19.084910428 +0000 UTC m=+0.092890467 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, version=9.7, architecture=x86_64, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible) Feb 1 04:33:19 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:33:20 localhost python3.9[261359]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:33:23 localhost podman[261379]: 2026-02-01 09:33:23.865796512 +0000 UTC m=+0.081436784 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute) Feb 1 04:33:23 
localhost podman[261379]: 2026-02-01 09:33:23.876312676 +0000 UTC m=+0.091952988 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 1 04:33:23 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
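[editor's note] The 04:33:20 ansible-ansible.legacy.dnf entry above installs device-mapper-multipath on this node. A minimal sketch of how to read back what that task leaves behind, assuming an RPM-based host such as the RHEL 9 system in this log and local shell access; the package name is taken from the log line, everything else is illustrative:

    #!/usr/bin/env python3
    # Sketch only: confirm the package the 04:33:20 ansible dnf task ensures is present.
    import subprocess

    def package_present(name: str) -> bool:
        """Return True if `rpm -q <name>` reports the package as installed."""
        result = subprocess.run(["rpm", "-q", name], capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == "__main__":
        pkg = "device-mapper-multipath"
        print(f"{pkg}: {'present' if package_present(pkg) else 'absent'}")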
Feb 1 04:33:25 localhost python3.9[261490]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 04:33:26 localhost python3.9[261600]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 1 04:33:27 localhost python3.9[261710]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:33:27 localhost python3.9[261767]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:28 localhost python3.9[261913]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:29 localhost python3.9[262055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:33:29 localhost python3.9[262166]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:33:30 localhost podman[236852]: time="2026-02-01T09:33:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:33:30 localhost podman[236852]: @ - - [01/Feb/2026:09:33:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:33:30 localhost podman[236852]: @ - - [01/Feb/2026:09:33:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16316 "" "Go-http-client/1.1" Feb 1 04:33:30 localhost python3.9[262295]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:33:31 localhost openstack_network_exporter[239388]: ERROR 09:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:33:31 localhost openstack_network_exporter[239388]: Feb 1 04:33:31 localhost openstack_network_exporter[239388]: ERROR 09:33:31 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:33:31 localhost openstack_network_exporter[239388]: Feb 1 04:33:32 localhost python3.9[262407]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:33:33 localhost python3.9[262518]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45624 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6662B110000000001030307) Feb 1 04:33:34 localhost python3.9[262628]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45625 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6662F0D0000000001030307) Feb 1 04:33:34 localhost python3.9[262738]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5549 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666310D0000000001030307) Feb 1 04:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
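[editor's note] The 04:33:32 grep and the 04:33:33 ansible.builtin.replace entries above edit the blacklist stanza of /etc/multipath.conf so that a catch-all devnode ".*" entry no longer blacklists every device. A small sketch of that substitution, using the exact pattern and replacement from the log against an assumed sample fragment (the real task edits the file in place; MULTILINE matching mirrors the ansible replace module):

    # Sketch only: reproduce the effect of the 04:33:33 replace task on a sample stanza.
    import re

    sample = 'blacklist {\n    devnode ".*"\n}\n'          # assumed input fragment
    pattern = r'^blacklist\s*{\n[\s]+devnode \"\.\*\"'      # copied from the log
    result = re.sub(pattern, 'blacklist {', sample, flags=re.MULTILINE)
    print(result)   # the catch-all devnode ".*" line is gone; the empty block remains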
Feb 1 04:33:35 localhost podman[262849]: 2026-02-01 09:33:35.366778228 +0000 UTC m=+0.083610850 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:33:35 localhost podman[262849]: 2026-02-01 09:33:35.404767771 +0000 UTC m=+0.121600393 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:33:35 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
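[editor's note] The podman_exporter healthcheck above reports healthy for the prometheus-podman-exporter container published on host port 9882 (the port mapping is in the config_data). A quick probe of that endpoint, assuming the conventional Prometheus /metrics path, which is not itself shown in the log, and that the script runs on the node:

    # Sketch only: count metric samples served by the exporter on port 9882.
    import urllib.request

    def scrape(url: str = "http://127.0.0.1:9882/metrics") -> int:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        # Drop comments (# HELP / # TYPE) and blank lines, keep sample lines only.
        return len([l for l in body.splitlines() if l and not l.startswith("#")])

    if __name__ == "__main__":
        print("metric samples:", scrape())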
Feb 1 04:33:35 localhost python3.9[262848]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:36 localhost python3.9[262981]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45626 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666370D0000000001030307) Feb 1 04:33:37 localhost python3.9[263091]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:33:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36032 DF PROTO=TCP SPT=49088 DPT=9102 SEQ=3408466566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6663B0E0000000001030307) Feb 1 04:33:38 localhost python3.9[263203]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
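[editor's note] After the lineinfile edits above (skip_kpartx, user_friendly_names, plus the earlier find_multipaths and recheck_wwid settings), the 04:33:38 task enables and starts multipathd.socket, and a matching task a few entries below does the same for multipathd itself. A sketch that reads back the resulting unit state, assuming systemd and permission to query it:

    # Sketch only: report what the multipathd systemd_service tasks leave behind.
    import subprocess

    def unit_state(unit: str) -> str:
        enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                 capture_output=True, text=True).stdout.strip()
        active = subprocess.run(["systemctl", "is-active", unit],
                                capture_output=True, text=True).stdout.strip()
        return f"{unit}: enabled={enabled or 'unknown'} active={active or 'unknown'}"

    if __name__ == "__main__":
        for unit in ("multipathd.socket", "multipathd.service"):
            print(unit_state(unit))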
Feb 1 04:33:39 localhost podman[263316]: 2026-02-01 09:33:39.814363259 +0000 UTC m=+0.080942069 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:33:39 localhost systemd[1]: tmp-crun.z8B5vx.mount: Deactivated successfully. Feb 1 04:33:39 localhost podman[263317]: 2026-02-01 09:33:39.901940431 +0000 UTC m=+0.163378022 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:33:39 localhost podman[263316]: 2026-02-01 09:33:39.905573992 +0000 UTC m=+0.172152782 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:33:39 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:33:39 localhost podman[263317]: 2026-02-01 09:33:39.964131349 +0000 UTC m=+0.225569000 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:33:39 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
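[editor's note] The node_exporter container above runs with --collector.systemd and a unit-include filter, so only a narrow set of services is exported. The filter can be illustrated with the pattern copied from that command line; anchored matching is assumed here (node_exporter's own anchoring is not visible in the log), and the sample unit names are illustrative:

    # Sketch only: evaluate the unit-include pattern against a few sample unit names.
    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    samples = [
        "edpm_nova_compute.service",
        "ovsdb-server.service",
        "virtqemud.service",
        "rsyslog.service",
        "sshd.service",            # outside the include list, so not collected
    ]
    for name in samples:
        matched = unit_include.fullmatch(name) is not None
        print(f"{name}: {'collected' if matched else 'skipped'}")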
Feb 1 04:33:40 localhost python3.9[263315]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45627 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66646CD0000000001030307) Feb 1 04:33:41 localhost python3.9[263475]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 1 04:33:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:33:41.750 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:33:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:33:41.750 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:33:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:33:41.750 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:33:41 localhost python3.9[263585]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 1 04:33:42 localhost python3.9[263695]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:33:43 localhost python3.9[263752]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:43 localhost python3.9[263862]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:44 localhost python3.9[263972]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False 
cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 1 04:33:46 localhost sshd[263975]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:33:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:33:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45628 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666670D0000000001030307) Feb 1 04:33:48 localhost systemd[1]: tmp-crun.vyuohI.mount: Deactivated successfully. Feb 1 04:33:48 localhost podman[264079]: 2026-02-01 09:33:48.875227031 +0000 UTC m=+0.087323895 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:33:48 localhost podman[264079]: 2026-02-01 09:33:48.881785584 +0000 UTC m=+0.093882458 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 1 04:33:48 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:33:49 localhost python3.9[264090]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 1 04:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:33:49 localhost podman[264159]: 2026-02-01 09:33:49.86959421 +0000 UTC m=+0.084162258 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, name=ubi9/ubi-minimal) Feb 1 04:33:49 localhost podman[264159]: 2026-02-01 09:33:49.887569145 +0000 UTC m=+0.102137153 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.) Feb 1 04:33:49 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:33:50 localhost python3.9[264238]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:33:51 localhost python3.9[264348]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:33:51 localhost systemd[1]: Reloading. Feb 1 04:33:52 localhost systemd-sysv-generator[264378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:33:52 localhost systemd-rc-local-generator[264373]: /etc/rc.d/rc.local is not marked executable, skipping. 
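[editor's note] The 04:33:41 to 04:33:44 entries a few screens above load nvme-fabrics, persist it via /etc/modules-load.d/nvme-fabrics.conf and /etc/modules, and install nvme-cli. A sketch that checks the two artifacts those tasks leave behind, assuming the drop-in simply lists the module name (as the module-load.conf.j2 template name suggests); /proc/modules reports the name with underscores, hence the alternate spelling:

    # Sketch only: verify the nvme-fabrics module is persisted and currently loaded.
    from pathlib import Path

    conf = Path("/etc/modules-load.d/nvme-fabrics.conf")
    persisted = conf.exists() and "nvme-fabrics" in conf.read_text()

    loaded = any(line.split()[0] == "nvme_fabrics"
                 for line in Path("/proc/modules").read_text().splitlines() if line)

    print(f"persisted={persisted} loaded={loaded}")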
Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:33:53 localhost python3.9[264492]: ansible-ansible.builtin.service_facts Invoked Feb 1 04:33:53 localhost network[264509]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 1 04:33:53 localhost network[264510]: 'network-scripts' will be removed from distribution in near future. Feb 1 04:33:53 localhost network[264511]: It is advised to switch to 'NetworkManager' instead for network management. Feb 1 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:33:54 localhost systemd[1]: tmp-crun.pQwHhm.mount: Deactivated successfully. 
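[editor's note] The systemd reload above warns that insights-client-boot.service still uses the deprecated MemoryLimit= directive and names MemoryMax= as the replacement. A sketch that finds such units on disk, assuming read access to /usr/lib/systemd/system (drop-in directories and /etc/systemd/system are omitted for brevity):

    # Sketch only: list unit files still carrying the deprecated MemoryLimit= directive.
    from pathlib import Path

    unit_dir = Path("/usr/lib/systemd/system")
    for unit in sorted(unit_dir.glob("*.service")):
        try:
            text = unit.read_text(errors="replace")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if line.strip().startswith("MemoryLimit="):
                print(f"{unit}:{lineno}: {line.strip()}  (use MemoryMax= instead)")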
Feb 1 04:33:54 localhost podman[264519]: 2026-02-01 09:33:54.525563249 +0000 UTC m=+0.101199013 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 1 04:33:54 localhost podman[264519]: 2026-02-01 09:33:54.564425468 +0000 UTC m=+0.140061232 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127) Feb 1 04:33:54 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:33:58 localhost python3.9[264763]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:33:59 localhost python3.9[264874]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:00 localhost podman[236852]: time="2026-02-01T09:34:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:34:00 localhost podman[236852]: @ - - [01/Feb/2026:09:34:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:34:00 localhost podman[236852]: @ - - [01/Feb/2026:09:34:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16312 "" "Go-http-client/1.1" Feb 1 04:34:00 localhost python3.9[264985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:01 localhost python3.9[265096]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:01 localhost openstack_network_exporter[239388]: ERROR 09:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:34:01 localhost openstack_network_exporter[239388]: Feb 1 04:34:01 localhost openstack_network_exporter[239388]: ERROR 09:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:34:01 localhost openstack_network_exporter[239388]: Feb 1 04:34:02 localhost python3.9[265207]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:02 localhost python3.9[265318]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:02 localhost nova_compute[225585]: 2026-02-01 09:34:02.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] 
Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:02 localhost nova_compute[225585]: 2026-02-01 09:34:02.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:34:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51450 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666A0410000000001030307) Feb 1 04:34:03 localhost python3.9[265429]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:04 localhost nova_compute[225585]: 2026-02-01 09:34:04.013 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51451 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666A44D0000000001030307) Feb 1 04:34:04 localhost nova_compute[225585]: 2026-02-01 09:34:04.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45629 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666A70E0000000001030307) Feb 1 04:34:05 localhost python3.9[265540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:34:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
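[editor's note] Between 04:33:58 and 04:34:05 the ansible systemd_service tasks above stop and disable the legacy tripleo_nova_* units as part of the adoption flow. The equivalent sequence expressed directly against systemd, with the unit names copied from the log and shown only to illustrate what those tasks do (running it requires root):

    # Sketch only: mirror the state=stopped / enabled=False systemd_service tasks.
    import subprocess

    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    def stop_and_disable(unit: str) -> None:
        # state=stopped and enabled=False in the ansible tasks map to these two calls.
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)

    if __name__ == "__main__":
        for unit in TRIPLEO_NOVA_UNITS:
            stop_and_disable(unit)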
Feb 1 04:34:05 localhost podman[265542]: 2026-02-01 09:34:05.868990325 +0000 UTC m=+0.081754513 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:34:05 localhost podman[265542]: 2026-02-01 09:34:05.881800051 +0000 UTC m=+0.094564239 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:34:05 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:34:05 localhost nova_compute[225585]: 2026-02-01 09:34:05.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:05 localhost nova_compute[225585]: 2026-02-01 09:34:05.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:05 localhost nova_compute[225585]: 2026-02-01 09:34:05.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:34:05 localhost nova_compute[225585]: 2026-02-01 09:34:05.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:05 localhost nova_compute[225585]: 2026-02-01 09:34:05.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:34:06 localhost nova_compute[225585]: 2026-02-01 09:34:06.020 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:34:06 localhost nova_compute[225585]: 2026-02-01 09:34:06.020 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51452 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666AC4D0000000001030307) Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.034 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.066 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.066 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.067 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.067 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 
2026-02-01 09:34:07.067 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:34:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5550 DF PROTO=TCP SPT=34100 DPT=9102 SEQ=2814826323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666AF0D0000000001030307) Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.516 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.704 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.705 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12909MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.706 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.706 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.826 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.827 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.885 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.969 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.970 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:34:07 localhost nova_compute[225585]: 2026-02-01 09:34:07.986 225589 DEBUG nova.scheduler.client.report [None 
req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.009 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.032 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.489 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.495 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.509 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 
'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.512 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:34:08 localhost nova_compute[225585]: 2026-02-01 09:34:08.512 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:34:09 localhost python3.9[265717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:09 localhost nova_compute[225585]: 2026-02-01 09:34:09.469 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:09 localhost nova_compute[225585]: 2026-02-01 09:34:09.470 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:09 localhost nova_compute[225585]: 2026-02-01 09:34:09.490 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:09 localhost nova_compute[225585]: 2026-02-01 09:34:09.490 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:09 localhost python3.9[265827]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
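The resource tracker audit logged above ends with Nova reporting an inventory of VCPU, MEMORY_MB and DISK_GB (each with a reserved amount and an allocation ratio) for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590. The schedulable capacity Placement derives from such an inventory is (total - reserved) * allocation_ratio per resource class; a minimal Python sketch using the exact figures from the payload above (the helper is illustrative, not Nova code):

    # Inventory payload as logged by nova.scheduler.client.report above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 0,   'allocation_ratio': 1.0},
    }

    def capacity(inv):
        # Placement-style capacity: (total - reserved) * allocation_ratio
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    print(capacity(inventory))
    # {'VCPU': 128.0, 'MEMORY_MB': 15226.0, 'DISK_GB': 41.0}

With no instances on the node ("total allocated vcpus: 0" in the final resource view above), all of that capacity is free.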
Feb 1 04:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:34:10 localhost systemd[1]: tmp-crun.rbARw2.mount: Deactivated successfully. Feb 1 04:34:10 localhost podman[265937]: 2026-02-01 09:34:10.327953576 +0000 UTC m=+0.080211665 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:34:10 localhost podman[265938]: 2026-02-01 09:34:10.341259227 +0000 UTC m=+0.087202791 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:34:10 localhost podman[265938]: 2026-02-01 09:34:10.377870157 +0000 UTC m=+0.123813751 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:34:10 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:34:10 localhost podman[265937]: 2026-02-01 09:34:10.402885259 +0000 UTC m=+0.155143338 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:34:10 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:34:10 localhost python3.9[265944]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51453 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666BC0D0000000001030307) Feb 1 04:34:10 localhost nova_compute[225585]: 2026-02-01 09:34:10.996 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:10 localhost nova_compute[225585]: 2026-02-01 09:34:10.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:34:10 localhost nova_compute[225585]: 2026-02-01 09:34:10.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:34:11 localhost nova_compute[225585]: 2026-02-01 09:34:11.028 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:34:11 localhost python3.9[266094]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:11 localhost python3.9[266204]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:12 localhost python3.9[266314]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:13 localhost python3.9[266424]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:13 localhost python3.9[266534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:15 localhost python3.9[266644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:15 localhost python3.9[266754]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:16 localhost python3.9[266864]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service 
state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:17 localhost python3.9[266974]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:18 localhost python3.9[267084]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:18 localhost python3.9[267194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51454 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA666DD0E0000000001030307) Feb 1 04:34:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:34:19 localhost systemd[1]: tmp-crun.0edbUU.mount: Deactivated successfully. 
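The recurring kernel "DROPPING:" entries in this log record TCP SYNs from 192.168.122.10 to port 9102 on br-ex being dropped and retransmitted (same SEQ, incrementing IP ID). A short parser for the KEY=value layout shown in those lines, counting drops per source and destination port; the field names come from the entries above and the script is only illustrative:

    import re
    from collections import Counter

    # Match the SRC/DST/SPT/DPT fields of kernel "DROPPING:" firewall log lines
    # like the ones above (leading spaces avoid the MACSRC/MACDST fields).
    line_re = re.compile(r'DROPPING:.*? SRC=(?P<src>\S+) DST=(?P<dst>\S+)'
                         r'.*? SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)')

    def count_drops(lines):
        hits = Counter()
        for line in lines:
            m = line_re.search(line)
            if m:
                hits[(m['src'], m['dpt'])] += 1
        return hits

    sample = ('kernel: DROPPING: IN=br-ex OUT= SRC=192.168.122.10 '
              'DST=192.168.122.108 LEN=60 PROTO=TCP SPT=43886 DPT=9102 SYN')
    print(count_drops([sample]))   # Counter({('192.168.122.10', '9102'): 1})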
Feb 1 04:34:19 localhost podman[267212]: 2026-02-01 09:34:19.872462891 +0000 UTC m=+0.091078920 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:34:19 localhost podman[267212]: 2026-02-01 09:34:19.884662598 +0000 UTC m=+0.103278607 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:34:19 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:34:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:34:19 localhost podman[267280]: 2026-02-01 09:34:19.997266602 +0000 UTC m=+0.075104148 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1769056855, distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.7) Feb 1 04:34:20 localhost podman[267280]: 2026-02-01 09:34:20.014769762 +0000 UTC m=+0.092607268 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, version=9.7, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:34:20 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:34:20 localhost python3.9[267341]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:20 localhost python3.9[267451]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:34:21 localhost python3.9[267561]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:22 localhost python3.9[267671]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 1 04:34:23 localhost python3.9[267781]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 1 04:34:23 localhost systemd[1]: Reloading. Feb 1 04:34:23 localhost systemd-sysv-generator[267810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:34:23 localhost systemd-rc-local-generator[267803]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
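The ansible-ansible.legacy.command entry above carries a multi-line shell script whose newlines are logged as "#012"; the "#033[00m" trailing the nova_compute entries is the same octal escaping applied to an ANSI reset sequence. A small sketch that undoes this escaping when post-processing such lines (it assumes only the octal control-character escapes seen here):

    import re

    # Undo syslog-style octal escapes (#012 = newline, #011 = tab, #033 = ESC)
    # so multi-line shell payloads in the entries above read naturally.
    def unescape_journal(text: str) -> str:
        return re.sub(r'#(\d{3})', lambda m: chr(int(m.group(1), 8)), text)

    logged = ('if systemctl is-active certmonger.service; then#012  '
              'systemctl disable --now certmonger.service#012fi')
    print(unescape_journal(logged))

Decoded, the script above simply disables (and, if no local unit file exists, masks) certmonger.service when it is active.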
Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:34:24 localhost systemd[1]: tmp-crun.KRO4PJ.mount: Deactivated successfully. Feb 1 04:34:24 localhost podman[267903]: 2026-02-01 09:34:24.891223564 +0000 UTC m=+0.095817667 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:34:24 localhost podman[267903]: 2026-02-01 09:34:24.90468909 +0000 UTC m=+0.109283233 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 1 04:34:24 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:34:25 localhost python3.9[267946]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:25 localhost python3.9[268057]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:26 localhost python3.9[268168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:27 localhost python3.9[268279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:29 localhost python3.9[268390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:30 localhost podman[236852]: time="2026-02-01T09:34:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:34:30 localhost podman[236852]: @ - - [01/Feb/2026:09:34:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:34:30 localhost podman[236852]: @ - - [01/Feb/2026:09:34:30 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16314 "" "Go-http-client/1.1" Feb 1 04:34:30 localhost python3.9[268501]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:30 localhost python3.9[268662]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:31 localhost openstack_network_exporter[239388]: ERROR 09:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:34:31 localhost openstack_network_exporter[239388]: Feb 1 04:34:31 localhost openstack_network_exporter[239388]: ERROR 09:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:34:31 localhost openstack_network_exporter[239388]: Feb 1 04:34:32 localhost python3.9[268866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:34:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43885 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66715710000000001030307) Feb 1 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43886 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667198D0000000001030307) Feb 1 04:34:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51455 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6671D0E0000000001030307) Feb 1 04:34:35 localhost python3.9[268977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:34:36 localhost systemd[1]: tmp-crun.K2bqA5.mount: Deactivated successfully. 
Feb 1 04:34:36 localhost podman[269088]: 2026-02-01 09:34:36.156234159 +0000 UTC m=+0.099483531 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:34:36 localhost podman[269088]: 2026-02-01 09:34:36.165107813 +0000 UTC m=+0.108357165 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:34:36 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
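The repeated "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs throughout this log are transient systemd units invoking podman's healthcheck for each container (ovn_controller, node_exporter, ovn_metadata_agent, openstack_network_exporter, ceilometer_agent_compute, podman_exporter). The same check can be run by hand; a minimal wrapper, assuming only that "podman healthcheck run" exits 0 when the configured check passes:

    import subprocess

    # `podman healthcheck run CONTAINER` exits 0 when the container's
    # configured healthcheck passes, non-zero otherwise.
    def is_healthy(container: str) -> bool:
        result = subprocess.run(['podman', 'healthcheck', 'run', container],
                                capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == '__main__':
        for name in ('ovn_controller', 'node_exporter', 'podman_exporter'):
            print(name, 'healthy' if is_healthy(name) else 'unhealthy')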
Feb 1 04:34:36 localhost python3.9[269087]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43887 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667218D0000000001030307) Feb 1 04:34:36 localhost nova_compute[225585]: 2026-02-01 09:34:36.818 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:34:36 localhost python3.9[269219]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45630 DF PROTO=TCP SPT=60988 DPT=9102 SEQ=3044147456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667250D0000000001030307) Feb 1 04:34:37 localhost python3.9[269329]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:38 localhost python3.9[269439]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:38 localhost python3.9[269549]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:39 localhost python3.9[269659]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:40 localhost python3.9[269769]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:34:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43888 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667314E0000000001030307) Feb 1 04:34:40 localhost systemd[1]: tmp-crun.OnqsCU.mount: Deactivated successfully. Feb 1 04:34:40 localhost systemd[1]: tmp-crun.JKNv6n.mount: Deactivated successfully. Feb 1 04:34:40 localhost podman[269880]: 2026-02-01 09:34:40.735236073 +0000 UTC m=+0.138965727 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:34:40 localhost podman[269881]: 2026-02-01 09:34:40.70562587 +0000 UTC m=+0.107765876 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:34:40 localhost podman[269881]: 2026-02-01 09:34:40.808630088 +0000 UTC m=+0.210770084 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:34:40 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:34:40 localhost python3.9[269879]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:40 localhost podman[269880]: 2026-02-01 09:34:40.852835802 +0000 UTC m=+0.256565496 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:34:40 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:34:41 localhost python3.9[270038]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:34:41.751 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:34:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:34:41.751 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:34:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:34:41.751 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:34:48 localhost python3.9[270148]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Feb 1 04:34:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43889 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667510D0000000001030307) Feb 1 04:34:50 localhost sshd[270167]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:34:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:34:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:34:50 localhost systemd-logind[761]: New session 60 of user zuul. Feb 1 04:34:50 localhost systemd[1]: Started Session 60 of User zuul. Feb 1 04:34:50 localhost podman[270169]: 2026-02-01 09:34:50.598258281 +0000 UTC m=+0.095503150 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, architecture=x86_64, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.7, io.openshift.expose-services=, release=1769056855) Feb 1 04:34:50 localhost podman[270170]: 2026-02-01 09:34:50.640365977 +0000 UTC m=+0.135944545 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:34:50 localhost podman[270170]: 2026-02-01 09:34:50.649770737 +0000 UTC m=+0.145349365 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:34:50 localhost podman[270169]: 2026-02-01 09:34:50.660696073 +0000 UTC m=+0.157940962 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container) Feb 1 04:34:50 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:34:50 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:34:50 localhost systemd[1]: session-60.scope: Deactivated successfully. Feb 1 04:34:50 localhost systemd-logind[761]: Session 60 logged out. Waiting for processes to exit. Feb 1 04:34:50 localhost systemd-logind[761]: Removed session 60. Feb 1 04:34:51 localhost python3.9[270313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:51 localhost python3.9[270399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938490.9162667-2338-38331407707067/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:52 localhost python3.9[270507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:52 localhost python3.9[270562]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:53 localhost python3.9[270670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:54 localhost python3.9[270756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938493.128792-2338-81756465378164/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:54 localhost python3.9[270864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:34:56 localhost podman[270914]: 2026-02-01 09:34:56.158093826 +0000 UTC m=+0.078961941 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:34:56 localhost podman[270914]: 2026-02-01 09:34:56.19267586 +0000 UTC m=+0.113543925 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:34:56 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:34:57 localhost python3.9[270969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938494.229365-2338-84476054753385/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f97201355591685d5a25f9693d35e9cd6d9ded96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:58 localhost python3.9[271077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:58 localhost python3.9[271163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938497.680119-2338-102621600784515/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:34:59 localhost python3.9[271271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:34:59 localhost python3.9[271357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769938498.8194146-2338-4323519888209/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:35:00 localhost podman[236852]: time="2026-02-01T09:35:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:35:00 localhost podman[236852]: @ - - [01/Feb/2026:09:35:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:35:00 localhost podman[236852]: @ - - [01/Feb/2026:09:35:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16305 "" 
"Go-http-client/1.1" Feb 1 04:35:00 localhost python3.9[271467]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:35:01 localhost python3.9[271577]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:35:01 localhost openstack_network_exporter[239388]: ERROR 09:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:35:01 localhost openstack_network_exporter[239388]: Feb 1 04:35:01 localhost openstack_network_exporter[239388]: ERROR 09:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:35:01 localhost openstack_network_exporter[239388]: Feb 1 04:35:02 localhost python3.9[271687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.402 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:35:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:35:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29866 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6678AA10000000001030307) Feb 1 04:35:03 localhost python3.9[271799]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29867 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6678E8E0000000001030307) Feb 1 04:35:04 localhost python3.9[271907]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:05 localhost nova_compute[225585]: 2026-02-01 09:35:05.014 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43890 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667910D0000000001030307) Feb 1 04:35:05 localhost python3.9[272017]: ansible-ansible.legacy.stat Invoked 
with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:35:05 localhost sshd[272020]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:35:05 localhost python3.9[272074]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:35:05 localhost nova_compute[225585]: 2026-02-01 09:35:05.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:35:06 localhost podman[272153]: 2026-02-01 09:35:06.375514686 +0000 UTC m=+0.088892607 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:35:06 localhost podman[272153]: 2026-02-01 09:35:06.385190674 +0000 UTC m=+0.098568575 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:35:06 localhost systemd[1]: 
a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:35:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29868 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667968D0000000001030307) Feb 1 04:35:06 localhost python3.9[272195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 1 04:35:06 localhost nova_compute[225585]: 2026-02-01 09:35:06.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:07 localhost python3.9[272260]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 1 04:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51456 DF PROTO=TCP SPT=43886 DPT=9102 SEQ=3716749661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6679B0D0000000001030307) Feb 1 04:35:07 localhost nova_compute[225585]: 2026-02-01 09:35:07.994 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:07 localhost nova_compute[225585]: 2026-02-01 09:35:07.995 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:35:08 localhost python3.9[272370]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Feb 1 04:35:08 localhost nova_compute[225585]: 2026-02-01 09:35:08.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.010 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.011 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.011 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.011 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.011 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:35:09 localhost python3.9[272481]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.436 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.626 225589 WARNING nova.virt.libvirt.driver [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.628 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12895MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.629 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.629 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.695 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.695 225589 DEBUG 
nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:35:09 localhost nova_compute[225585]: 2026-02-01 09:35:09.709 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:35:10 localhost nova_compute[225585]: 2026-02-01 09:35:10.159 225589 DEBUG oslo_concurrency.processutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:35:10 localhost nova_compute[225585]: 2026-02-01 09:35:10.166 225589 DEBUG nova.compute.provider_tree [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:35:10 localhost nova_compute[225585]: 2026-02-01 09:35:10.182 225589 DEBUG nova.scheduler.client.report [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:35:10 localhost nova_compute[225585]: 2026-02-01 09:35:10.185 225589 DEBUG nova.compute.resource_tracker [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:35:10 localhost nova_compute[225585]: 2026-02-01 09:35:10.185 225589 DEBUG oslo_concurrency.lockutils [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29869 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667A64D0000000001030307) Feb 1 04:35:10 localhost python3[272634]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:35:10 localhost python3[272634]: 
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.186 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.187 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.187 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:35:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:35:11 localhost podman[272806]: 2026-02-01 09:35:11.775301664 +0000 UTC m=+0.072226994 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:35:11 localhost podman[272806]: 2026-02-01 09:35:11.784614381 +0000 UTC m=+0.081539751 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:35:11 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:35:11 localhost podman[272805]: 2026-02-01 09:35:11.836465317 +0000 UTC m=+0.135264485 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:35:11 localhost podman[272805]: 2026-02-01 09:35:11.869838263 +0000 UTC m=+0.168637391 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller) Feb 1 04:35:11 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:35:11 localhost python3.9[272804]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.995 225589 DEBUG oslo_service.periodic_task [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:35:11 localhost nova_compute[225585]: 2026-02-01 09:35:11.996 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:35:12 localhost nova_compute[225585]: 2026-02-01 09:35:12.031 225589 DEBUG nova.compute.manager [None req-f5d4d234-046d-40ce-a06c-dfcffd290e3b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:35:13 localhost python3.9[272964]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Feb 1 04:35:13 localhost python3.9[273074]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 1 04:35:15 localhost python3[273184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False Feb 1 04:35:15 localhost python3[273184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 
"Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": 
"2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 1 04:35:16 localhost python3.9[273355]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:17 localhost python3.9[273467]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:35:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29870 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667C7140000000001030307) Feb 1 04:35:18 localhost python3.9[273576]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769938517.5381455-3028-115428751625794/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:35:19 localhost python3.9[273631]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:35:20 localhost systemd[1]: tmp-crun.VPj4L5.mount: Deactivated successfully. 
Feb 1 04:35:20 localhost podman[273635]: 2026-02-01 09:35:20.876914574 +0000 UTC m=+0.087059530 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:35:20 localhost podman[273634]: 2026-02-01 09:35:20.924569511 +0000 UTC m=+0.135237664 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, release=1769056855, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.7, name=ubi9/ubi-minimal, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64) Feb 1 04:35:20 localhost podman[273635]: 2026-02-01 09:35:20.937651093 +0000 UTC m=+0.147796089 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 1 04:35:20 localhost systemd[1]: 
412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:35:20 localhost podman[273634]: 2026-02-01 09:35:20.987807566 +0000 UTC m=+0.198475769 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:35:20 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:35:22 localhost python3.9[273778]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:23 localhost python3.9[273886]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:23 localhost python3.9[273994]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 1 04:35:24 localhost python3.9[274104]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 
1 04:35:25 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 119.2 (397 of 333 items), suggesting rotation. Feb 1 04:35:25 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 04:35:25 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:35:25 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:35:25 localhost python3.9[274237]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 1 04:35:26 localhost systemd[1]: Stopping nova_compute container... Feb 1 04:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:35:26 localhost podman[274254]: 2026-02-01 09:35:26.859204529 +0000 UTC m=+0.077980361 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:35:26 localhost podman[274254]: 2026-02-01 09:35:26.897801407 +0000 UTC m=+0.116577209 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, 
tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:35:26 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:35:30 localhost podman[236852]: time="2026-02-01T09:35:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:35:30 localhost podman[236852]: @ - - [01/Feb/2026:09:35:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146809 "" "Go-http-client/1.1" Feb 1 04:35:30 localhost podman[236852]: @ - - [01/Feb/2026:09:35:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16189 "" "Go-http-client/1.1" Feb 1 04:35:31 localhost nova_compute[225585]: 2026-02-01 09:35:31.430 225589 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Feb 1 04:35:31 localhost nova_compute[225585]: 2026-02-01 09:35:31.432 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:35:31 localhost nova_compute[225585]: 2026-02-01 09:35:31.433 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:35:31 localhost nova_compute[225585]: 2026-02-01 09:35:31.433 225589 DEBUG oslo_concurrency.lockutils [None req-a2410d21-6318-468a-bc6e-9c23e9caa3c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:35:31 localhost openstack_network_exporter[239388]: ERROR 09:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:35:31 localhost openstack_network_exporter[239388]: Feb 1 04:35:31 localhost openstack_network_exporter[239388]: ERROR 09:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:35:31 localhost openstack_network_exporter[239388]: Feb 1 04:35:31 localhost journal[224673]: End of file while reading data: Input/output error Feb 1 04:35:31 localhost systemd[1]: libpod-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b.scope: Deactivated successfully. Feb 1 04:35:31 localhost systemd[1]: libpod-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b.scope: Consumed 16.385s CPU time. Feb 1 04:35:31 localhost podman[274241]: 2026-02-01 09:35:31.779335304 +0000 UTC m=+5.761665395 container died 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0) Feb 1 04:35:31 localhost systemd[1]: tmp-crun.CHK18T.mount: Deactivated successfully. 
Feb 1 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b-userdata-shm.mount: Deactivated successfully. Feb 1 04:35:31 localhost podman[274241]: 2026-02-01 09:35:31.9439394 +0000 UTC m=+5.926269461 container cleanup 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute) Feb 1 04:35:31 localhost podman[274241]: nova_compute Feb 1 04:35:32 localhost podman[274300]: error opening file `/run/crun/6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b/status`: No such file or directory Feb 1 04:35:32 localhost podman[274288]: 2026-02-01 09:35:32.043468103 +0000 UTC m=+0.067004873 container cleanup 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:35:32 localhost podman[274288]: nova_compute Feb 1 04:35:32 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Feb 1 04:35:32 localhost systemd[1]: Stopped nova_compute container. Feb 1 04:35:32 localhost systemd[1]: Starting nova_compute container... Feb 1 04:35:32 localhost systemd[1]: Started libcrun container. Feb 1 04:35:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54995506782eae95026b9f51b92ada7520fd2a9b50cd4bb5084f3bc717596538/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:32 localhost podman[274302]: 2026-02-01 09:35:32.188954401 +0000 UTC m=+0.114943838 container init 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:35:32 localhost podman[274302]: 2026-02-01 09:35:32.198477794 +0000 UTC m=+0.124467251 container start 6376ec1aa7e36dfa9d6482e2e2c123bbd0b30c6c27b1932547ca6ecc8b480f1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, 
container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:35:32 localhost podman[274302]: nova_compute Feb 1 04:35:32 localhost nova_compute[274317]: + sudo -E kolla_set_configs Feb 1 04:35:32 localhost systemd[1]: Started nova_compute container. Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Validating config file Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying service configuration files Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 1 04:35:32 localhost nova_compute[274317]: 
INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /etc/ceph Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Creating directory /etc/ceph Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/ceph Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Writing out command to execute Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:35:32 localhost nova_compute[274317]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 1 04:35:32 localhost nova_compute[274317]: ++ cat /run_command Feb 1 04:35:32 localhost nova_compute[274317]: + CMD=nova-compute Feb 1 04:35:32 localhost nova_compute[274317]: + ARGS= Feb 1 04:35:32 localhost nova_compute[274317]: + sudo kolla_copy_cacerts Feb 1 04:35:32 localhost 
nova_compute[274317]: + [[ ! -n '' ]] Feb 1 04:35:32 localhost nova_compute[274317]: + . kolla_extend_start Feb 1 04:35:32 localhost nova_compute[274317]: Running command: 'nova-compute' Feb 1 04:35:32 localhost nova_compute[274317]: + echo 'Running command: '\''nova-compute'\''' Feb 1 04:35:32 localhost nova_compute[274317]: + umask 0022 Feb 1 04:35:32 localhost nova_compute[274317]: + exec nova-compute Feb 1 04:35:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65433 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA667FFD10000000001030307) Feb 1 04:35:33 localhost nova_compute[274317]: 2026-02-01 09:35:33.950 274321 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:35:33 localhost nova_compute[274317]: 2026-02-01 09:35:33.950 274321 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:35:33 localhost nova_compute[274317]: 2026-02-01 09:35:33.950 274321 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 1 04:35:33 localhost nova_compute[274317]: 2026-02-01 09:35:33.950 274321 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.067 274321 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.091 274321 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.091 274321 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 1 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65434 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66803CE0000000001030307) Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.548 274321 INFO nova.virt.driver [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.660 274321 INFO nova.compute.provider_config [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.667 274321 DEBUG oslo_concurrency.lockutils [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.667 274321 DEBUG oslo_concurrency.lockutils [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.667 274321 DEBUG oslo_concurrency.lockutils [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.667 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.668 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.669 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] console_host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cpu_allocation_ratio = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.670 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.671 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] enabled_ssl_apis = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.672 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] host = np0005604215.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.673 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.674 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 
274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.675 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.676 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.677 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.677 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.677 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_concurrent_live_migrations = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.677 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.677 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.678 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.678 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.679 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.679 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.679 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.679 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.680 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.680 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.680 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.680 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.680 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] network_allocate_retries = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.681 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - 
- - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.682 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.683 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] reserved_huge_pages = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.684 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.685 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.686 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - 
- - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.687 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.688 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] web = /usr/share/spice-html5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.689 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.690 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.enable_instance_password = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.691 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.692 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG 
oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.693 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.enable_socket_keepalive = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.694 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.695 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.696 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.697 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.698 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.699 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.700 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.701 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.702 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - 
-] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.703 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.704 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca 
- - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.705 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.706 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.707 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.708 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.709 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.slave_connection = **** 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.710 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.711 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.712 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.713 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.714 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.715 
274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.716 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.717 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.718 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 
274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.719 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.720 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG 
oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.721 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.722 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 
2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.723 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.retry_delay = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.724 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.725 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.726 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.727 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.728 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.729 274321 DEBUG oslo_service.service 
[None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.730 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 
09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.731 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.732 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.gid_maps = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.733 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.734 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG 
oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.735 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.736 274321 WARNING oslo_config.cfg [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 1 04:35:34 localhost nova_compute[274317]: live_migration_uri is deprecated for removal in favor of two other options that Feb 1 04:35:34 localhost nova_compute[274317]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 1 04:35:34 localhost nova_compute[274317]: and ``live_migration_inbound_addr`` respectively. Feb 1 04:35:34 localhost nova_compute[274317]: ). Its value may be silently ignored in the future.#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.737 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.738 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rbd_secret_uuid = 33fac0b9-80c7-560f-918a-c92d3021ca1e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.739 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.740 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.741 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG 
oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.742 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 
localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.743 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.744 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.default_floating_pool = nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.745 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.746 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.service_name = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.747 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.748 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 
09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.749 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 
04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.750 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.751 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.region_name = regionOne log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.752 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.753 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.754 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.755 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.756 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.757 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.query_placement_for_image_type_support = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.758 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 
2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.759 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.760 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 
DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.761 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.762 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.763 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.764 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.765 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.766 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.767 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.768 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.769 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.770 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.771 274321 DEBUG oslo_service.service [None 
req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.772 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.773 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.774 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.775 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.776 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.777 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca 
- - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.778 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.779 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 
1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.780 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.781 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.782 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] 
oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.783 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.784 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.785 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.786 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - 
-] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.787 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.788 274321 DEBUG oslo_service.service 
[None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.789 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 
localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.790 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.791 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.792 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.793 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - 
- - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.794 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost 
nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.795 274321 DEBUG oslo_service.service [None req-d404af50-9067-4cca-9774-28a606849fca - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.796 274321 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.809 274321 INFO nova.virt.node [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Determined node identity d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from /var/lib/nova/compute_id#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.809 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.810 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.810 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.810 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.820 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.822 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.823 274321 INFO nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Connection event '1' reason 'None'#033[00m Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.828 274321 INFO nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host capabilities Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: [libvirt host capabilities XML not preserved in this capture; recoverable fields: host UUID b72fb799-3472-4728-b6e2-ec98d2bbb61b; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory 16116604 KiB (4029151 pages); security models selinux (labels system_u:system_r:svirt_t:s0, system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support for 32-bit and 64-bit via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (alias q35)]#033[00m
Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.833 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.835 274321 DEBUG nova.virt.libvirt.volume.mount [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.838 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 1 04:35:34 localhost nova_compute[274317]: [libvirt domain capabilities XML for arch i686 / machine pc not preserved in this capture; recoverable fields: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom, pflash), host CPU model EPYC-Rome (AMD); usable custom CPU models listed include 486 (-v1), Broadwell (-IBRS, -noTSX, -noTSX-IBRS, -v1..v4), Cascadelake-Server (-noTSX, -v1..v5), ClearwaterForest (-v1), Conroe (-v1), Cooperlake (-v1, -v2), Denverton (-v1..v3), Dhyana (-v1, -v2), EPYC, EPYC-Genoa (-v1, -v2), EPYC-IBPB, EPYC-Milan (-v1..v3), EPYC-Rome (-v1..v5), EPYC-Turin (-v1), EPYC-v1..v5, GraniteRapids; listing continues] Feb 1 04:35:34 localhost
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: GraniteRapids-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: GraniteRapids-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: GraniteRapids-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-noTSX Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-noTSX-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Haswell-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-noTSX Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v5 Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v6 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Icelake-Server-v7 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: IvyBridge Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: IvyBridge-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: IvyBridge-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: IvyBridge-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: KnightsMill Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: KnightsMill-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G1-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G2 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G2-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G3 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G3-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G4-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G5 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: 
Opteron_G5-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Penryn Feb 1 04:35:34 localhost nova_compute[274317]: Penryn-v1 Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-v1 Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-v2 Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: 
Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 
04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-noTSX-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-noTSX-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
Feb 1 04:35:34 localhost nova_compute[274317]: [libvirt domain capabilities XML, continued; element markup not captured in the log. Remaining supported CPU models: Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1]
Feb 1 04:35:34 localhost nova_compute[274317]: [device/feature enumerations from the same dump: memory backing source types: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; hostdev models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM backend version: 2.0; redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv; character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input]
Feb 1 04:35:34 localhost nova_compute[274317]: [end of previous capabilities dump; trailing values: 4095, on, off, off, Linux KVM Hv] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 1 04:35:34 localhost nova_compute[274317]: 2026-02-01 09:35:34.844 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 1 04:35:34 localhost nova_compute[274317]: [libvirt domain capabilities XML; element markup not captured in the log. Emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: i686; loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; further enum values: on, off; on, off; host CPU model: EPYC-Rome; vendor: AMD]
Feb 1 04:35:34 localhost nova_compute[274317]: [supported CPU models for the i686/q35 dump: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1 (list continues)]
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Nehalem-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G1-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G2 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G2-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G3 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G3-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G4-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G5 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Opteron_G5-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Penryn Feb 1 04:35:34 localhost nova_compute[274317]: Penryn-v1 Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-v1 Feb 1 04:35:34 localhost nova_compute[274317]: SandyBridge-v2 Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SapphireRapids-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 
04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 
04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: SierraForest-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 
1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-noTSX-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Client-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-noTSX-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Skylake-Server-v5 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Snowridge Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Snowridge-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Snowridge-v2 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Snowridge-v3 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Snowridge-v4 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Westmere Feb 1 04:35:34 localhost nova_compute[274317]: Westmere-IBRS Feb 1 04:35:34 localhost nova_compute[274317]: Westmere-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Westmere-v2 Feb 1 04:35:34 localhost nova_compute[274317]: athlon Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: athlon-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: core2duo Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: core2duo-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: coreduo Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: coreduo-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: kvm32 Feb 1 04:35:34 localhost nova_compute[274317]: kvm32-v1 Feb 1 04:35:34 localhost nova_compute[274317]: kvm64 Feb 1 04:35:34 localhost nova_compute[274317]: kvm64-v1 Feb 1 04:35:34 localhost nova_compute[274317]: n270 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: n270-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: pentium Feb 1 04:35:34 localhost nova_compute[274317]: pentium-v1 Feb 1 04:35:34 localhost nova_compute[274317]: pentium2 Feb 1 04:35:34 localhost nova_compute[274317]: pentium2-v1 Feb 1 04:35:34 localhost nova_compute[274317]: pentium3 Feb 1 04:35:34 localhost nova_compute[274317]: pentium3-v1 Feb 1 04:35:34 localhost nova_compute[274317]: phenom Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: phenom-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: qemu32 Feb 1 04:35:34 localhost 
nova_compute[274317]: qemu32-v1 Feb 1 04:35:34 localhost nova_compute[274317]: qemu64 Feb 1 04:35:34 localhost nova_compute[274317]: qemu64-v1 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: file Feb 1 04:35:34 localhost nova_compute[274317]: anonymous Feb 1 04:35:34 localhost nova_compute[274317]: memfd Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: disk Feb 1 04:35:34 localhost nova_compute[274317]: cdrom Feb 1 04:35:34 localhost nova_compute[274317]: floppy Feb 1 04:35:34 localhost nova_compute[274317]: lun Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: fdc Feb 1 04:35:34 localhost nova_compute[274317]: scsi Feb 1 04:35:34 localhost nova_compute[274317]: virtio Feb 1 04:35:34 localhost nova_compute[274317]: usb Feb 1 04:35:34 localhost nova_compute[274317]: sata Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: virtio Feb 1 04:35:34 localhost nova_compute[274317]: virtio-transitional Feb 1 04:35:34 localhost nova_compute[274317]: virtio-non-transitional Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: vnc Feb 1 04:35:34 localhost nova_compute[274317]: egl-headless Feb 1 04:35:34 localhost nova_compute[274317]: dbus Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: subsystem Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: default Feb 1 04:35:34 localhost nova_compute[274317]: mandatory Feb 1 04:35:34 localhost nova_compute[274317]: requisite Feb 1 04:35:34 localhost nova_compute[274317]: optional Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: usb Feb 1 04:35:34 localhost nova_compute[274317]: pci Feb 1 04:35:34 localhost nova_compute[274317]: scsi Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: virtio Feb 1 04:35:34 localhost nova_compute[274317]: virtio-transitional Feb 1 04:35:34 localhost nova_compute[274317]: virtio-non-transitional Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: random Feb 1 04:35:34 localhost nova_compute[274317]: egd Feb 1 04:35:34 localhost nova_compute[274317]: builtin Feb 1 04:35:34 localhost 
nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: path Feb 1 04:35:34 localhost nova_compute[274317]: handle Feb 1 04:35:34 localhost nova_compute[274317]: virtiofs Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: tpm-tis Feb 1 04:35:34 localhost nova_compute[274317]: tpm-crb Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: emulator Feb 1 04:35:34 localhost nova_compute[274317]: external Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: 2.0 Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: usb Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: pty Feb 1 04:35:34 localhost nova_compute[274317]: unix Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: qemu Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: builtin Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: default Feb 1 04:35:34 localhost nova_compute[274317]: passt Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: isa Feb 1 04:35:34 localhost nova_compute[274317]: hyperv Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: null Feb 1 04:35:34 localhost nova_compute[274317]: vc Feb 1 04:35:34 localhost nova_compute[274317]: pty Feb 1 04:35:34 localhost nova_compute[274317]: dev Feb 1 04:35:34 localhost nova_compute[274317]: file Feb 1 04:35:34 localhost nova_compute[274317]: pipe Feb 1 04:35:34 localhost nova_compute[274317]: stdio Feb 1 04:35:34 localhost nova_compute[274317]: udp Feb 1 04:35:34 localhost nova_compute[274317]: tcp Feb 1 04:35:34 localhost nova_compute[274317]: unix Feb 1 04:35:34 localhost nova_compute[274317]: qemu-vdagent Feb 1 04:35:34 localhost nova_compute[274317]: dbus Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 
localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: Feb 1 04:35:34 localhost nova_compute[274317]: relaxed Feb 1 04:35:34 localhost nova_compute[274317]: vapic Feb 1 04:35:34 localhost nova_compute[274317]: spinlocks Feb 1 04:35:34 localhost nova_compute[274317]: vpindex Feb 1 04:35:34 localhost nova_compute[274317]: runtime Feb 1 04:35:34 localhost nova_compute[274317]: synic Feb 1 04:35:34 localhost nova_compute[274317]: stimer Feb 1 04:35:34 localhost nova_compute[274317]: reset Feb 1 04:35:34 localhost nova_compute[274317]: vendor_id Feb 1 04:35:34 localhost nova_compute[274317]: frequencies Feb 1 04:35:34 localhost nova_compute[274317]: reenlightenment Feb 1 04:35:35 localhost nova_compute[274317]: tlbflush Feb 1 04:35:35 localhost nova_compute[274317]: ipi Feb 1 04:35:35 localhost nova_compute[274317]: avic Feb 1 04:35:35 localhost nova_compute[274317]: emsr_bitmap Feb 1 04:35:35 localhost nova_compute[274317]: xmm_input Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 4095 Feb 1 04:35:35 localhost nova_compute[274317]: on Feb 1 04:35:35 localhost nova_compute[274317]: off Feb 1 04:35:35 localhost nova_compute[274317]: off Feb 1 04:35:35 localhost nova_compute[274317]: Linux KVM Hv Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:34.918 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:34.924 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: /usr/libexec/qemu-kvm Feb 1 04:35:35 localhost nova_compute[274317]: kvm Feb 1 04:35:35 localhost nova_compute[274317]: pc-i440fx-rhel7.6.0 Feb 1 04:35:35 localhost nova_compute[274317]: x86_64 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: rom Feb 1 04:35:35 localhost nova_compute[274317]: pflash Feb 
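The DEBUG record above shows nova.virt.libvirt.host asking libvirt for a <domainCapabilities> document per machine type ('pc', 'q35') and logging the returned XML, which is what the reformatted dumps in this log contain. The following is a minimal sketch of the same query made directly with the libvirt Python bindings, not Nova's implementation; it assumes the python3-libvirt package and a local qemu:///system libvirtd, and reuses the emulator path, arch, and machine types seen in the log.

# Sketch only: query libvirt's <domainCapabilities> for each machine type
# mentioned in the log and summarize the CPU model data it reports.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
try:
    for machine in ("pc", "q35"):
        caps_xml = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm",  # emulator binary reported in the log
            "x86_64", machine, "kvm", 0)
        root = ET.fromstring(caps_xml)
        host_model = root.findtext("./cpu/mode[@name='host-model']/model")
        models = root.findall("./cpu/mode[@name='custom']/model")
        usable = [m.text for m in models if m.get("usable") == "yes"]
        print(f"{machine}: host-model={host_model}, "
              f"{len(usable)}/{len(models)} named CPU models usable")
finally:
    conn.close()

The same information is available from the command line with virsh domcapabilities --arch x86_64 --machine pc --virttype kvm.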
Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:34.924 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domainCapabilities XML for the pc machine type; element markup not preserved by the journal] emulator path: /usr/libexec/qemu-kvm; domain type: kvm; canonical machine: pc-i440fx-rhel7.6.0; arch: x86_64; firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; loader readonly: yes, no; secure: no; CPU mode migratable toggles: on, off
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domainCapabilities XML, continued] host CPU model: EPYC-Rome, vendor: AMD; named CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, …
04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Denverton-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Dhyana Feb 1 04:35:35 localhost nova_compute[274317]: Dhyana-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Dhyana-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Genoa Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Genoa-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 
04:35:35 localhost nova_compute[274317]: EPYC-Genoa-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-IBPB Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Milan Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Milan-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Milan-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Milan-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 
04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome-v4 Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Rome-v5 Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Turin Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-Turin-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-v1 Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-v2 Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: EPYC-v5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: GraniteRapids Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 
Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: GraniteRapids-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: GraniteRapids-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: GraniteRapids-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-noTSX Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-noTSX-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 
localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-noTSX Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 
Icelake-Server-v5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v6 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v7 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 
localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: KnightsMill Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: KnightsMill-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G1-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G2 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G2-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G3 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G3-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G4-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 
Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G5-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Penryn Feb 1 04:35:35 localhost nova_compute[274317]: Penryn-v1 Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-v1 Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-v2 Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 
Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domain capabilities, continued] named CPU models: SapphireRapids-v4, SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domain capabilities, continued] memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev mode: subsystem (startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi); rng models: virtio, virtio-transitional, virtio-non-transitional (backends: random, egd, builtin); filesystem drivers: path, handle, virtiofs; tpm models: tpm-tis, tpm-crb (backends: emulator, external; version 2.0); redirdev bus: usb; channel types: pty, unix; other backends: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv; console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; trailing values: 4095, on, off, off, Linux KVM Hv _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
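For reference: the dump above and the q35 dump that follows are the domainCapabilities documents that nova's _get_domain_capabilities helper fetches from libvirt. A minimal sketch, assuming libvirt-python is available and reusing the emulator, arch, machine type and virt type values seen in this log, of how the same document could be retrieved and its CPU model list extracted:

    import libvirt                      # libvirt-python bindings (assumed installed)
    import xml.etree.ElementTree as ET

    # Connect to the local system hypervisor, as nova-compute does.
    conn = libvirt.open("qemu:///system")

    # Ask libvirt for domain capabilities of the emulator/arch/machine/virt type
    # seen in this log. "q35" is the machine alias for pc-q35-rhel9.8.0 here.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary from the log
        "x86_64",                 # architecture
        "q35",                    # machine type
        "kvm",                    # virtualization type
        0,
    )
    conn.close()

    root = ET.fromstring(caps_xml)
    host_model = root.findtext("./cpu/mode[@name='host-model']/model")
    custom_models = [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]
    print("host CPU model:", host_model)
    print("named CPU models:", ", ".join(custom_models))

The roughly equivalent shell query is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm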
Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:34.991 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: path /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch x86_64
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domain capabilities, continued] firmware: efi (loaders: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no); CPU modes: host-passthrough (migratable: on, off), maximum (migratable: on, off), host-model EPYC-Rome (vendor AMD)
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domain capabilities, continued] named CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-noTSX Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-noTSX-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Haswell-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-noTSX Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v6 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: 
Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Icelake-Server-v7 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: IvyBridge-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: KnightsMill Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: KnightsMill-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Nehalem-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G1-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G2 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G2-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G3 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G3-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G4-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Opteron_G5-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Penryn Feb 1 04:35:35 localhost nova_compute[274317]: Penryn-v1 Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-v1 Feb 1 04:35:35 localhost nova_compute[274317]: SandyBridge-v2 Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 
localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 
04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 
localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SapphireRapids-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SierraForest Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 
04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SierraForest-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SierraForest-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: SierraForest-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-noTSX-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Client-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-noTSX-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost 
nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Skylake-Server-v5 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Snowridge Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Snowridge-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Snowridge-v2 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 
04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Snowridge-v3 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Snowridge-v4 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Westmere Feb 1 04:35:35 localhost nova_compute[274317]: Westmere-IBRS Feb 1 04:35:35 localhost nova_compute[274317]: Westmere-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Westmere-v2 Feb 1 04:35:35 localhost nova_compute[274317]: athlon Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: athlon-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: core2duo Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: core2duo-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: coreduo Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: coreduo-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: kvm32 Feb 1 04:35:35 localhost nova_compute[274317]: kvm32-v1 Feb 1 04:35:35 localhost nova_compute[274317]: kvm64 Feb 1 04:35:35 localhost nova_compute[274317]: kvm64-v1 Feb 1 04:35:35 localhost nova_compute[274317]: n270 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: n270-v1 Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: Feb 1 04:35:35 localhost nova_compute[274317]: pentium Feb 1 04:35:35 localhost nova_compute[274317]: pentium-v1 Feb 1 04:35:35 localhost nova_compute[274317]: pentium2 Feb 1 04:35:35 localhost nova_compute[274317]: pentium2-v1 Feb 1 04:35:35 localhost nova_compute[274317]: pentium3 Feb 1 04:35:35 localhost nova_compute[274317]: pentium3-v1 Feb 1 04:35:35 localhost nova_compute[274317]: phenom Feb 1 04:35:35 localhost 
Feb 1 04:35:35 localhost nova_compute[274317]: [libvirt domainCapabilities XML dumped by DEBUG nova.virt.libvirt.host _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m; the XML markup was stripped in this capture, leaving only element values. Recoverable values (grouping approximate): CPU models phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing file, anonymous, memfd; disk devices disk, cdrom, floppy, lun on buses fdc, scsi, virtio, usb, sata; virtio, virtio-transitional, virtio-non-transitional device models; graphics vnc, egl-headless, dbus; hostdev subsystem usb, pci, scsi with startup policies default, mandatory, requisite, optional; rng backends random, egd, builtin; filesystem drivers path, handle, virtiofs; TPM models tpm-tis, tpm-crb with backends emulator, external, version 2.0; redirdev bus usb; char/console types pty, unix, null, vc, dev, file, pipe, stdio, udp, tcp, qemu-vdagent, dbus; panic models isa, hyperv; additional backend values qemu, builtin, default, passt; hyperv features relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; plus the values 4095, on, off, off, Linux KVM Hv.]
Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.053 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.053 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.058 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Checking secure boot support for host arch
(x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.058 274321 INFO nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Secure Boot support detected#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.060 274321 INFO nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.060 274321 INFO nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.069 274321 DEBUG nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.087 274321 INFO nova.virt.node [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Determined node identity d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from /var/lib/nova/compute_id#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.102 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Verified node d5eeed9a-e4d0-4244-8d4e-39e5c8263590 matches my host np0005604215.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.143 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.281 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.282 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.282 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.282 274321 DEBUG nova.compute.resource_tracker [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.282 274321 DEBUG oslo_concurrency.processutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:35:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29871 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668070E0000000001030307) Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.774 274321 DEBUG oslo_concurrency.processutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.955 274321 WARNING nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.956 274321 DEBUG nova.compute.resource_tracker [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12906MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.956 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:35:35 localhost nova_compute[274317]: 2026-02-01 09:35:35.956 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:35:36 localhost python3.9[274572]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None 
ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.136 274321 DEBUG nova.compute.resource_tracker [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.138 274321 DEBUG nova.compute.resource_tracker [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.197 274321 DEBUG nova.scheduler.client.report [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.224 274321 DEBUG nova.scheduler.client.report [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.225 274321 DEBUG nova.compute.provider_tree [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.242 274321 DEBUG nova.scheduler.client.report [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.275 274321 DEBUG nova.scheduler.client.report [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.299 274321 DEBUG oslo_concurrency.processutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:35:36 localhost systemd[1]: Started libpod-conmon-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope. Feb 1 04:35:36 localhost systemd[1]: Started libcrun container. 
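Note: the ProviderTree inventory logged just above follows directly from the hypervisor totals reported a few entries earlier (8 vCPUs, 15738 MB RAM with 512 MB reserved, 41 GB disk) combined with the allocation ratios. A minimal sketch of that mapping, with illustrative names rather than nova's actual resource-tracker code:

    def build_inventory(vcpus, memory_mb, disk_gb,
                        reserved_mb=512,
                        cpu_ratio=16.0, ram_ratio=1.0, disk_ratio=1.0):
        # Shape matches the inventory payload in the log above.
        return {
            'VCPU': {'total': vcpus, 'reserved': 0, 'min_unit': 1,
                     'max_unit': vcpus, 'step_size': 1,
                     'allocation_ratio': cpu_ratio},
            'MEMORY_MB': {'total': memory_mb, 'reserved': reserved_mb,
                          'min_unit': 1, 'max_unit': memory_mb,
                          'step_size': 1, 'allocation_ratio': ram_ratio},
            'DISK_GB': {'total': disk_gb, 'reserved': 0, 'min_unit': 1,
                        'max_unit': disk_gb, 'step_size': 1,
                        'allocation_ratio': disk_ratio},
        }

    # Reproduces the payload reported for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590:
    print(build_inventory(8, 15738, 41))

With allocation_ratio 16.0 on VCPU, the scheduler treats these 8 physical vCPUs as 128 schedulable units, which is why the node shows 0 used of 8 here while still being heavily oversubscribable.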
Feb 1 04:35:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 1 04:35:36 localhost podman[274596]: 2026-02-01 09:35:36.370349181 +0000 UTC m=+0.132242071 container init 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3) Feb 1 04:35:36 localhost podman[274596]: 2026-02-01 09:35:36.379897525 +0000 UTC m=+0.141790415 container start 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Feb 1 04:35:36 localhost python3.9[274572]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start 
nova_compute_init Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Applying nova statedir ownership Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9 Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd Feb 1 04:35:36 localhost nova_compute_init[274616]: INFO:nova_statedir:Nova statedir ownership complete Feb 1 04:35:36 localhost systemd[1]: libpod-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope: Deactivated successfully. 
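Note: the nova_compute_init messages above come from /sbin/nova_statedir_ownership.py walking /var/lib/nova, skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP, re-owning anything not already 42436:42436 and relabelling directories to container_file_t. A rough illustration of that loop (a sketch, not the script actually shipped in the image):

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436
    STATEDIR = '/var/lib/nova'
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '/var/lib/nova/compute_id')
    SECONTEXT = 'system_u:object_r:container_file_t:s0'

    def fix_ownership(path):
        if path == SKIP:
            return
        st = os.lstat(path)
        print(f'Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}')
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            print(f'Changing ownership of {path} from {st.st_uid}:{st.st_gid} '
                  f'to {TARGET_UID}:{TARGET_GID}')
            os.lchown(path, TARGET_UID, TARGET_GID)
        if os.path.isdir(path):
            # chcon used here purely to illustrate the relabel step in the log.
            subprocess.run(['chcon', SECONTEXT, path], check=False)

    for root, dirs, files in os.walk(STATEDIR):
        fix_ownership(root)
        for name in files:
            fix_ownership(os.path.join(root, name))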
Feb 1 04:35:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:35:36 localhost podman[274648]: 2026-02-01 09:35:36.523230416 +0000 UTC m=+0.059893865 container died 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:35:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65435 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6680BCD0000000001030307) Feb 1 04:35:36 localhost podman[274655]: 2026-02-01 09:35:36.616705103 +0000 UTC m=+0.132482668 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:35:36 localhost podman[274655]: 2026-02-01 09:35:36.628556967 +0000 UTC m=+0.144334452 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:35:36 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:35:36 localhost podman[274648]: 2026-02-01 09:35:36.661180892 +0000 UTC m=+0.197844281 container cleanup 01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:35:36 localhost systemd[1]: libpod-conmon-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d.scope: Deactivated successfully. 
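Note: the config_data recorded in these podman events for nova_compute_init corresponds roughly to a one-shot container invocation like the following; the flag mapping is approximate, reconstructed from the event labels rather than taken from the edpm_ansible role itself:

    import subprocess

    # Approximate CLI equivalent of the nova_compute_init config_data above.
    cmd = [
        'podman', 'run', '--name', 'nova_compute_init',
        '--user', 'root', '--net', 'none',
        '--security-opt', 'label=disable',
        '--env', 'NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id',
        '--env', '__OS_DEBUG=False',
        '-v', '/dev/log:/dev/log',
        '-v', '/var/lib/nova:/var/lib/nova:shared',
        '-v', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z',
        '-v', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z',
        'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified',
        'bash', '-c',
        'python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init',
    ]
    subprocess.run(cmd, check=True)

The container runs once with restart 'never', which matches the immediate died/cleanup events above after the ownership pass completes.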
Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.763 274321 DEBUG oslo_concurrency.processutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.769 274321 DEBUG nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 1 04:35:36 localhost nova_compute[274317]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.769 274321 INFO nova.virt.libvirt.host [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.771 274321 DEBUG nova.compute.provider_tree [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.771 274321 DEBUG nova.virt.libvirt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.803 274321 DEBUG nova.scheduler.client.report [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.883 274321 DEBUG nova.compute.resource_tracker [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.883 274321 DEBUG oslo_concurrency.lockutils [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.884 274321 DEBUG nova.service [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.967 274321 DEBUG nova.service [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Join ServiceGroup membership for this service compute 
start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Feb 1 04:35:36 localhost nova_compute[274317]: 2026-02-01 09:35:36.968 274321 DEBUG nova.servicegroup.drivers.db [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] DB_Driver: join new ServiceGroup member np0005604215.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Feb 1 04:35:37 localhost systemd[1]: session-59.scope: Deactivated successfully. Feb 1 04:35:37 localhost systemd[1]: session-59.scope: Consumed 1min 15.889s CPU time. Feb 1 04:35:37 localhost systemd-logind[761]: Session 59 logged out. Waiting for processes to exit. Feb 1 04:35:37 localhost systemd-logind[761]: Removed session 59. Feb 1 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay-a02df9e1e28dd9ec9663cf49666c784b8876dc545b7721fca6e88de98c1c0890-merged.mount: Deactivated successfully. Feb 1 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01ee0565d74fab69009e84c8e8c677af0f5369f8891e268438df8736d6cfb27d-userdata-shm.mount: Deactivated successfully. Feb 1 04:35:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43891 DF PROTO=TCP SPT=55848 DPT=9102 SEQ=2030436982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6680F0D0000000001030307) Feb 1 04:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65436 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6681B8D0000000001030307) Feb 1 04:35:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:35:41.751 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:35:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:35:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:35:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:35:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:35:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:35:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
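Note: the repeated "Acquiring lock / Lock acquired / Lock released" triplets ("compute_resources" earlier, "_check_child_processes" here) are the standard oslo.concurrency debug trace around a named in-process lock. A small sketch of the pattern, assuming the usual lockutils API; update_available_resource below is a stand-in, not nova's own method:

    from oslo_concurrency import lockutils

    # Serializes a resource-tracker style critical section under a named lock;
    # oslo logs the acquire/release lines seen above when debug logging is on.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # audit and report resources while holding the lock

    # Equivalent context-manager form:
    with lockutils.lock('_check_child_processes'):
        pass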
Feb 1 04:35:42 localhost podman[274715]: 2026-02-01 09:35:42.863972384 +0000 UTC m=+0.078222909 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller) Feb 1 04:35:42 localhost podman[274715]: 2026-02-01 09:35:42.909599469 +0000 UTC m=+0.123849964 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:35:42 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:35:42 localhost podman[274716]: 2026-02-01 09:35:42.926367495 +0000 UTC m=+0.135297856 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:35:42 localhost podman[274716]: 2026-02-01 09:35:42.936883578 +0000 UTC m=+0.145813979 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:35:42 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
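Note: each "Started /usr/bin/podman healthcheck run <id>" unit above executes the container's configured check (the '/openstack/healthcheck ...' test in config_data) and the result surfaces as the health_status in these container events. A hedged sketch of driving the same check from Python; the exit-code interpretation is assumed rather than quoted from the podman docs, and container_is_healthy is a local helper:

    import subprocess

    def container_is_healthy(container_id: str) -> bool:
        # 'podman healthcheck run' exits 0 when the configured check passes.
        result = subprocess.run(
            ['podman', 'healthcheck', 'run', container_id],
            capture_output=True, text=True)
        return result.returncode == 0

    print(container_is_healthy(
        'c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603'))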
Feb 1 04:35:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65437 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6683B0D0000000001030307) Feb 1 04:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:35:51 localhost podman[274763]: 2026-02-01 09:35:51.86902212 +0000 UTC m=+0.083912014 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, name=ubi9/ubi-minimal, release=1769056855) Feb 1 04:35:51 localhost podman[274763]: 2026-02-01 09:35:51.883726532 +0000 UTC m=+0.098616446 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1769056855, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:35:51 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:35:51 localhost podman[274764]: 2026-02-01 09:35:51.966890292 +0000 UTC m=+0.177923757 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:35:51 localhost podman[274764]: 2026-02-01 09:35:51.974679601 +0000 UTC m=+0.185713086 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:35:51 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:35:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:35:54.651 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:35:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:35:54.653 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:35:57 localhost podman[274798]: 2026-02-01 09:35:57.905143602 +0000 UTC m=+0.081730147 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:35:57 localhost podman[274798]: 2026-02-01 09:35:57.913739207 +0000 UTC m=+0.090325732 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:35:57 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
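The health_status / exec_died / "Deactivated successfully" triplets above are the normal life cycle of a podman health check: a transient systemd unit named after the container ID runs `/usr/bin/podman healthcheck run <id>`, podman executes the configured test (e.g. `/openstack/healthcheck compute`) and records the result on the container, then the unit exits. A minimal sketch of reading that recorded state back, assuming podman is on PATH and the container names match the `name=` labels in the log:

```python
import json
import subprocess

def health_status(container: str) -> str:
    """Read the health state podman stored after `podman healthcheck run`
    executed the container's configured test command."""
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", container],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out) or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    # Container names taken from the name= labels logged above.
    for ctr in ("ovn_metadata_agent", "ceilometer_agent_compute", "ovn_controller"):
        print(ctr, health_status(ctr))
```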
Feb 1 04:36:00 localhost podman[236852]: time="2026-02-01T09:36:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:36:00 localhost podman[236852]: @ - - [01/Feb/2026:09:36:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:36:00 localhost podman[236852]: @ - - [01/Feb/2026:09:36:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16316 "" "Go-http-client/1.1" Feb 1 04:36:00 localhost ovn_metadata_agent[158650]: 2026-02-01 09:36:00.656 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:36:01 localhost openstack_network_exporter[239388]: ERROR 09:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:36:01 localhost openstack_network_exporter[239388]: Feb 1 04:36:01 localhost openstack_network_exporter[239388]: ERROR 09:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:36:01 localhost openstack_network_exporter[239388]: Feb 1 04:36:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50868 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66875020000000001030307) Feb 1 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50869 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668790D0000000001030307) Feb 1 04:36:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65438 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6687B0D0000000001030307) Feb 1 04:36:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50870 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668810D0000000001030307) Feb 1 04:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
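The two ovn_metadata_agent entries tie together: at 09:35:54 the agent matched an SB_Global update that bumped nb_cfg from 5 to 6 and logged "Delaying updating chassis table for 6 seconds"; at 09:36:00 it issued the DbSetCommand that writes {'neutron:ovn-metadata-sb-cfg': '6'} into its Chassis_Private row, acknowledging that configuration generation. A hedged sketch of confirming the write from the host, assuming ovn-sbctl inside the ovn_controller container can reach the southbound database (on a compute-only node it may need an explicit --db= argument):

```python
import subprocess

def chassis_private_rows() -> str:
    """Dump name and external_ids of every Chassis_Private row so the
    neutron:ovn-metadata-sb-cfg value can be compared with nb_cfg.
    Assumes the southbound DB is reachable with ovn-sbctl's defaults."""
    return subprocess.run(
        ["podman", "exec", "ovn_controller",
         "ovn-sbctl", "--columns=name,external_ids", "list", "Chassis_Private"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(chassis_private_rows())
```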
Feb 1 04:36:06 localhost podman[274815]: 2026-02-01 09:36:06.830196186 +0000 UTC m=+0.081783488 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:36:06 localhost podman[274815]: 2026-02-01 09:36:06.842809954 +0000 UTC m=+0.094397236 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:36:06 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29872 DF PROTO=TCP SPT=58798 DPT=9102 SEQ=3651054409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668850E0000000001030307) Feb 1 04:36:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50871 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66890CD0000000001030307) Feb 1 04:36:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:36:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
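The podman[236852] access-log lines ("GET /v4.9.3/libpod/containers/json?..." and ".../containers/stats?...") are the libpod REST API being polled over /run/podman/podman.sock, the same socket the podman_exporter container above mounts and points CONTAINER_HOST at. A minimal sketch of the same list query, assuming curl is installed and the socket is readable by the caller:

```python
import json
import subprocess

SOCKET = "/run/podman/podman.sock"
# The host part of the URL is ignored when curl talks to a unix socket.
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

def list_containers():
    """Issue the same libpod API call seen in the access log above, via curl."""
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", SOCKET, URL],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    for ctr in list_containers():
        print(ctr.get("Names"), ctr.get("State"))
```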
Feb 1 04:36:13 localhost podman[274840]: 2026-02-01 09:36:13.87167871 +0000 UTC m=+0.086584026 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:36:13 localhost podman[274840]: 2026-02-01 09:36:13.881679608 +0000 UTC m=+0.096584954 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:36:13 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
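The node_exporter config_data above disables most collectors and restricts the systemd collector to edpm_*/ovs*/openvswitch/virt*/rsyslog units, publishing on host port 9100 ('ports': ['9100:9100']). A small spot check of what that filter lets through, assuming the exporter is reachable on localhost:9100 from wherever this runs:

```python
import urllib.request

def systemd_unit_metrics(url: str = "http://localhost:9100/metrics"):
    """Return only the node_systemd_unit_state series, i.e. the units the
    --collector.systemd.unit-include regex in the config above admits."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", "replace")
    return [line for line in body.splitlines()
            if line.startswith("node_systemd_unit_state")]

if __name__ == "__main__":
    for line in systemd_unit_metrics()[:20]:
        print(line)
```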
Feb 1 04:36:13 localhost podman[274839]: 2026-02-01 09:36:13.979586291 +0000 UTC m=+0.196561931 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:36:14 localhost podman[274839]: 2026-02-01 09:36:14.057736586 +0000 UTC m=+0.274712176 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3) Feb 1 04:36:14 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
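The recurring kernel "DROPPING:" entries in this window (04:36:03 through 04:36:18 above) are netfilter LOG output for TCP SYNs from 192.168.122.10 to port 9102 arriving on br-ex; the repeated SPT/SEQ values show the sender retransmitting while the packets are discarded. A small parser that summarizes which flows are being dropped, grounded only in the fields visible in those lines:

```python
import re
from collections import Counter

# Field pattern for the netfilter LOG lines prefixed with "DROPPING:" above.
FIELD = re.compile(r"\b(SRC|DST|PROTO|SPT|DPT)=(\S+)")

def summarize_drops(lines):
    """Count dropped packets per (SRC, DST, PROTO, DPT) flow."""
    flows = Counter()
    for line in lines:
        if "DROPPING:" not in line:
            continue
        f = dict(FIELD.findall(line))
        flows[(f.get("SRC"), f.get("DST"), f.get("PROTO"), f.get("DPT"))] += 1
    return flows

if __name__ == "__main__":
    sample = ("Feb 1 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= "
              "SRC=192.168.122.10 DST=192.168.122.108 LEN=60 PROTO=TCP "
              "SPT=58798 DPT=9102 SYN")
    print(summarize_drops([sample]))
```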
Feb 1 04:36:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50872 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668B10D0000000001030307) Feb 1 04:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:36:22 localhost podman[274889]: 2026-02-01 09:36:22.864159692 +0000 UTC m=+0.074217496 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, architecture=x86_64, release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:36:22 localhost podman[274889]: 2026-02-01 09:36:22.876539062 +0000 UTC m=+0.086596936 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855) Feb 1 04:36:22 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:36:22 localhost podman[274890]: 2026-02-01 09:36:22.930566276 +0000 UTC m=+0.137412670 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127) Feb 1 04:36:22 localhost podman[274890]: 2026-02-01 09:36:22.959713202 +0000 UTC m=+0.166559626 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:36:22 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:36:25 localhost sshd[274928]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:36:25 localhost nova_compute[274317]: 2026-02-01 09:36:25.970 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:25 localhost nova_compute[274317]: 2026-02-01 09:36:25.988 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
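The nova_compute DEBUG lines above ("Running periodic task ComputeManager._sync_power_states", "_cleanup_running_deleted_instances", and the longer batch at 09:36:34) come from oslo_service's periodic task machinery referenced in the logged path (oslo_service/periodic_task.py:210). An illustrative sketch of how such tasks are registered, not nova's actual code, assuming oslo.config and oslo.service are installed:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    """Toy manager: every decorated method is collected at class creation and
    executed by run_periodic_tasks(), which emits DEBUG lines like those above."""

    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _sync_power_states(self, context):
        print("sync power states", context)

    @periodic_task.periodic_task(spacing=600, run_immediately=True)
    def _cleanup_running_deleted_instances(self, context):
        print("cleanup deleted instances", context)

if __name__ == "__main__":
    mgr = DemoManager()
    # In a real service a looping call invokes this on an interval; tasks with
    # run_immediately=True fire on the first pass, the rest wait one spacing.
    mgr.run_periodic_tasks(context=None)
```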
Feb 1 04:36:28 localhost podman[274930]: 2026-02-01 09:36:28.864183553 +0000 UTC m=+0.079290792 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 1 04:36:28 localhost podman[274930]: 2026-02-01 09:36:28.876634556 +0000 UTC m=+0.091741795 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:36:28 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:36:30 localhost podman[236852]: time="2026-02-01T09:36:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:36:30 localhost podman[236852]: @ - - [01/Feb/2026:09:36:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:36:30 localhost podman[236852]: @ - - [01/Feb/2026:09:36:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16304 "" "Go-http-client/1.1" Feb 1 04:36:31 localhost openstack_network_exporter[239388]: ERROR 09:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:36:31 localhost openstack_network_exporter[239388]: Feb 1 04:36:31 localhost openstack_network_exporter[239388]: ERROR 09:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:36:31 localhost openstack_network_exporter[239388]: Feb 1 04:36:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51381 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668EA310000000001030307) Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.102 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.120 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.120 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.121 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.121 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.122 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.122 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.122 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.123 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.123 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.142 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.143 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.143 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.143 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.144 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:36:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51382 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668EE4E0000000001030307) Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.587 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.819 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.821 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12874MB free_disk=41.8370475769043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.822 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.822 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.914 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.914 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:36:34 localhost nova_compute[274317]: 2026-02-01 09:36:34.941 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:36:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50873 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668F10D0000000001030307) Feb 1 04:36:35 localhost nova_compute[274317]: 2026-02-01 09:36:35.392 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:36:35 localhost nova_compute[274317]: 2026-02-01 09:36:35.398 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:36:35 localhost nova_compute[274317]: 2026-02-01 09:36:35.422 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:36:35 localhost nova_compute[274317]: 2026-02-01 09:36:35.425 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:36:35 localhost nova_compute[274317]: 2026-02-01 09:36:35.425 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:36:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51383 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668F64D0000000001030307) Feb 1 
04:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65439 DF PROTO=TCP SPT=45760 DPT=9102 SEQ=4111040520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA668F90D0000000001030307) Feb 1 04:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:36:37 localhost podman[275081]: 2026-02-01 09:36:37.871763897 +0000 UTC m=+0.082526011 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:36:37 localhost podman[275081]: 2026-02-01 09:36:37.879094233 +0000 UTC m=+0.089856327 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:36:37 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
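The resource tracker audit at 09:36:34-35 reports the hypervisor view (8 vCPUs, 15738 MB RAM with 512 MB reserved, 41 GB disk backed by Ceph via the `ceph df` calls) and an unchanged placement inventory with allocation ratios of 16.0 / 1.0 / 1.0. The schedulable capacity placement derives from that inventory is (total - reserved) * allocation_ratio; a worked check using exactly the numbers logged:

```python
# Inventory exactly as logged by nova.scheduler.client.report at 09:36:35.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: ({inv['total']} - {inv['reserved']}) * "
          f"{inv['allocation_ratio']} = {capacity}")
# VCPU: 128.0, MEMORY_MB: 15226.0, DISK_GB: 41.0
```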
Feb 1 04:36:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51384 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669060D0000000001030307) Feb 1 04:36:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:36:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:36:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:36:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:36:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:36:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:36:44 localhost systemd[1]: tmp-crun.44i2Q3.mount: Deactivated successfully. Feb 1 04:36:44 localhost podman[275105]: 2026-02-01 09:36:44.87193157 +0000 UTC m=+0.086417121 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:36:44 localhost podman[275106]: 2026-02-01 09:36:44.917579654 +0000 UTC m=+0.129804045 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:36:44 localhost podman[275106]: 2026-02-01 09:36:44.927654385 +0000 UTC m=+0.139878766 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:36:44 localhost podman[275105]: 2026-02-01 09:36:44.937723904 +0000 UTC m=+0.152209485 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:36:44 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:36:44 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:36:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51385 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669270D0000000001030307) Feb 1 04:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:36:53 localhost systemd[1]: tmp-crun.Y9AB3S.mount: Deactivated successfully. 
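The openstack_network_exporter ERRORs that repeat every 30 seconds in this log (call(dpif-netdev/pmd-perf-show) and call(dpif-netdev/pmd-rxq-show): "please specify an existing datapath") are consistent with a host whose Open vSwitch bridges use the kernel datapath: those two appctl commands only apply to the userspace (netdev/DPDK) datapath with PMD threads. A hedged way to confirm which datapaths exist, assuming ovs-appctl and ovs-vsctl on the host can reach the vswitchd sockets under /run/openvswitch:

```python
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Lists configured datapaths and their ports; a kernel ("system") datapath
    # and no "netdev" one would explain the pmd-perf-show errors above.
    print(run(["ovs-appctl", "dpif/show"]))
    # Per-bridge datapath type: empty or "system" = kernel, "netdev" = userspace.
    print(run(["ovs-vsctl", "get", "bridge", "br-ex", "datapath_type"]))
```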
Feb 1 04:36:53 localhost podman[275156]: 2026-02-01 09:36:53.881403354 +0000 UTC m=+0.090429384 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:36:53 localhost podman[275156]: 2026-02-01 09:36:53.915623927 +0000 UTC m=+0.124649917 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:36:53 localhost systemd[1]: tmp-crun.HtWLGq.mount: Deactivated successfully. Feb 1 04:36:53 localhost podman[275155]: 2026-02-01 09:36:53.930877827 +0000 UTC m=+0.142468606 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:36:53 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:36:53 localhost podman[275155]: 2026-02-01 09:36:53.943524685 +0000 UTC m=+0.155115454 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, version=9.7) Feb 1 04:36:53 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:36:59 localhost podman[275193]: 2026-02-01 09:36:59.906875288 +0000 UTC m=+0.123336746 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:36:59 localhost podman[275193]: 2026-02-01 09:36:59.918712483 +0000 UTC m=+0.135173901 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:36:59 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:37:00 localhost podman[236852]: time="2026-02-01T09:37:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:37:00 localhost podman[236852]: @ - - [01/Feb/2026:09:37:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:37:00 localhost podman[236852]: @ - - [01/Feb/2026:09:37:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16316 "" "Go-http-client/1.1" Feb 1 04:37:01 localhost openstack_network_exporter[239388]: ERROR 09:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:37:01 localhost openstack_network_exporter[239388]: Feb 1 04:37:01 localhost openstack_network_exporter[239388]: ERROR 09:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:37:01 localhost openstack_network_exporter[239388]: Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.403 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 
localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:37:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:37:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28220 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6695F610000000001030307) Feb 1 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28221 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669634D0000000001030307) Feb 1 04:37:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51386 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669670D0000000001030307) Feb 1 04:37:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28222 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6696B4D0000000001030307) Feb 1 04:37:06 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Feb 1 04:37:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50874 DF PROTO=TCP SPT=33804 DPT=9102 SEQ=429368433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6696F0E0000000001030307) Feb 1 04:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:37:08 localhost podman[275213]: 2026-02-01 09:37:08.862914991 +0000 UTC m=+0.078334959 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:37:08 localhost podman[275213]: 2026-02-01 09:37:08.870237107 +0000 UTC m=+0.085657105 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:37:08 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:37:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28223 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6697B0D0000000001030307) Feb 1 04:37:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:37:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:37:15 localhost podman[275236]: 2026-02-01 09:37:15.867601241 +0000 UTC m=+0.080956190 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:37:15 localhost systemd[1]: tmp-crun.up0L2I.mount: Deactivated successfully. Feb 1 04:37:15 localhost podman[275237]: 2026-02-01 09:37:15.921477785 +0000 UTC m=+0.132942655 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:37:15 localhost podman[275236]: 2026-02-01 09:37:15.928264404 +0000 UTC m=+0.141619353 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:37:15 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:37:15 localhost podman[275237]: 2026-02-01 09:37:15.97934351 +0000 UTC m=+0.190808390 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:37:15 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:37:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28224 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA6699B0D0000000001030307) Feb 1 04:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. 
Feb 1 04:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:37:24 localhost podman[275284]: 2026-02-01 09:37:24.865717462 +0000 UTC m=+0.076971257 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:37:24 localhost podman[275284]: 2026-02-01 09:37:24.874575925 +0000 UTC m=+0.085829700 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Feb 1 04:37:24 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:37:24 localhost podman[275283]: 2026-02-01 09:37:24.91748835 +0000 UTC m=+0.131366516 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z) Feb 1 04:37:24 localhost podman[275283]: 2026-02-01 09:37:24.958713783 +0000 UTC m=+0.172591959 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 1 04:37:24 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:37:30 localhost podman[236852]: time="2026-02-01T09:37:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:37:30 localhost podman[236852]: @ - - [01/Feb/2026:09:37:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:37:30 localhost podman[236852]: @ - - [01/Feb/2026:09:37:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16318 "" "Go-http-client/1.1" Feb 1 04:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:37:30 localhost podman[275321]: 2026-02-01 09:37:30.865283918 +0000 UTC m=+0.077139993 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute) Feb 1 04:37:30 localhost podman[275321]: 2026-02-01 09:37:30.899164894 +0000 UTC m=+0.111020999 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 1 04:37:30 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:37:31 localhost openstack_network_exporter[239388]: ERROR 09:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:37:31 localhost openstack_network_exporter[239388]: Feb 1 04:37:31 localhost openstack_network_exporter[239388]: ERROR 09:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:37:31 localhost openstack_network_exporter[239388]: Feb 1 04:37:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32298 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669D4910000000001030307) Feb 1 04:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32299 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669D88D0000000001030307) Feb 1 04:37:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28225 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669DB0D0000000001030307) Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.418 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.440 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.441 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.441 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.457 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.457 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.457 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.458 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.458 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.459 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.480 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.481 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.481 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.482 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.482 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:37:35 localhost nova_compute[274317]: 2026-02-01 09:37:35.923 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.096 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.097 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12889MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.098 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.098 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.190 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.190 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.212 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:37:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32300 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669E08E0000000001030307) Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.667 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.673 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.695 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 2026-02-01 09:37:36.698 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:37:36 localhost nova_compute[274317]: 
2026-02-01 09:37:36.698 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:37:37 localhost nova_compute[274317]: 2026-02-01 09:37:37.339 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:37 localhost nova_compute[274317]: 2026-02-01 09:37:37.340 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:37 localhost nova_compute[274317]: 2026-02-01 09:37:37.340 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:37:37 localhost nova_compute[274317]: 2026-02-01 09:37:37.340 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:37:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51387 DF PROTO=TCP SPT=49496 DPT=9102 SEQ=1413387119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669E50D0000000001030307) Feb 1 04:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:37:39 localhost podman[275470]: 2026-02-01 09:37:39.8656687 +0000 UTC m=+0.080104925 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:37:39 localhost podman[275470]: 2026-02-01 09:37:39.876177594 +0000 UTC m=+0.090613829 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:37:39 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:37:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32301 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA669F04E0000000001030307) Feb 1 04:37:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:37:41.752 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:37:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:37:41.753 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:37:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:37:41.753 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:37:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:37:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:37:46 localhost systemd[1]: tmp-crun.pnIEZP.mount: Deactivated successfully. Feb 1 04:37:46 localhost podman[275493]: 2026-02-01 09:37:46.917596118 +0000 UTC m=+0.126075502 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true) Feb 1 04:37:46 localhost systemd[1]: tmp-crun.SfGiBk.mount: Deactivated successfully. 
Feb 1 04:37:46 localhost podman[275494]: 2026-02-01 09:37:46.932333293 +0000 UTC m=+0.137806175 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:37:46 localhost podman[275494]: 2026-02-01 09:37:46.94810399 +0000 UTC m=+0.153576882 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:37:46 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:37:46 localhost podman[275493]: 2026-02-01 09:37:46.990381545 +0000 UTC m=+0.198860919 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:37:47 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:37:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32302 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A110D0000000001030307) Feb 1 04:37:49 localhost sshd[275542]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:37:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:37:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:37:56 localhost podman[275544]: 2026-02-01 09:37:56.389920637 +0000 UTC m=+0.082230529 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:37:56 localhost podman[275544]: 2026-02-01 09:37:56.405690924 +0000 UTC m=+0.098000786 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-type=git, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, release=1769056855, io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter) Feb 1 04:37:56 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:37:56 localhost systemd[1]: tmp-crun.m3mhsb.mount: Deactivated successfully. 
Feb 1 04:37:56 localhost podman[275545]: 2026-02-01 09:37:56.490212902 +0000 UTC m=+0.180123930 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:37:56 localhost podman[275545]: 2026-02-01 09:37:56.520737385 +0000 UTC m=+0.210648413 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 1 04:37:56 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:38:00 localhost podman[236852]: time="2026-02-01T09:38:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:38:00 localhost podman[236852]: @ - - [01/Feb/2026:09:38:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:38:00 localhost podman[236852]: @ - - [01/Feb/2026:09:38:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16313 "" "Go-http-client/1.1" Feb 1 04:38:01 localhost openstack_network_exporter[239388]: ERROR 09:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:38:01 localhost openstack_network_exporter[239388]: Feb 1 04:38:01 localhost openstack_network_exporter[239388]: ERROR 09:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:38:01 localhost openstack_network_exporter[239388]: Feb 1 04:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:38:01 localhost podman[275582]: 2026-02-01 09:38:01.869092719 +0000 UTC m=+0.084114968 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:38:01 localhost podman[275582]: 2026-02-01 09:38:01.904813112 +0000 UTC m=+0.119835291 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:38:01 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 04:38:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20001 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A49C10000000001030307) Feb 1 04:38:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20002 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A4DCE0000000001030307) Feb 1 04:38:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32303 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A510D0000000001030307) Feb 1 04:38:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20003 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A55CD0000000001030307) Feb 1 04:38:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28226 DF PROTO=TCP SPT=46326 DPT=9102 SEQ=2536956989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A590D0000000001030307) Feb 1 04:38:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20004 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A658D0000000001030307) Feb 1 04:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:38:10 localhost podman[275600]: 2026-02-01 09:38:10.863975142 +0000 UTC m=+0.079777923 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:38:10 localhost podman[275600]: 2026-02-01 09:38:10.871604488 +0000 UTC m=+0.087407259 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:38:10 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:38:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:38:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:38:17 localhost podman[275622]: 2026-02-01 09:38:17.86277902 +0000 UTC m=+0.080285359 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:38:17 localhost podman[275622]: 2026-02-01 09:38:17.919798051 +0000 UTC m=+0.137304390 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller) Feb 1 04:38:17 localhost podman[275623]: 2026-02-01 09:38:17.931379909 +0000 UTC m=+0.145003759 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:38:17 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:38:17 localhost podman[275623]: 2026-02-01 09:38:17.942739059 +0000 UTC m=+0.156362919 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:38:17 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:38:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20005 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66A850D0000000001030307) Feb 1 04:38:20 localhost sshd[275669]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:38:21 localhost systemd-logind[761]: New session 61 of user zuul. Feb 1 04:38:21 localhost systemd[1]: Started Session 61 of User zuul. 
Feb 1 04:38:21 localhost python3[275691]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:38:21 localhost subscription-manager[275692]: Unregistered machine with identity: 228c691b-7b73-45e5-afcd-2aea3d003268 Feb 1 04:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:38:26 localhost podman[275695]: 2026-02-01 09:38:26.875177164 +0000 UTC m=+0.084982386 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:38:26 localhost podman[275695]: 2026-02-01 09:38:26.910696499 +0000 UTC m=+0.120501731 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:38:26 localhost podman[275694]: 2026-02-01 09:38:26.922116752 +0000 UTC m=+0.133916525 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, release=1769056855, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Feb 1 04:38:26 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:38:26 localhost podman[275694]: 2026-02-01 09:38:26.935192946 +0000 UTC m=+0.146992749 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, release=1769056855, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9) Feb 1 04:38:26 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:38:30 localhost podman[236852]: time="2026-02-01T09:38:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:38:30 localhost podman[236852]: @ - - [01/Feb/2026:09:38:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:38:30 localhost podman[236852]: @ - - [01/Feb/2026:09:38:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16321 "" "Go-http-client/1.1" Feb 1 04:38:31 localhost openstack_network_exporter[239388]: ERROR 09:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:38:31 localhost openstack_network_exporter[239388]: Feb 1 04:38:31 localhost openstack_network_exporter[239388]: ERROR 09:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:38:31 localhost openstack_network_exporter[239388]: Feb 1 04:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:38:32 localhost systemd[1]: tmp-crun.17CGPm.mount: Deactivated successfully. 
Feb 1 04:38:32 localhost podman[275734]: 2026-02-01 09:38:32.901458444 +0000 UTC m=+0.117072355 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 1 04:38:32 localhost podman[275734]: 2026-02-01 09:38:32.939039644 +0000 UTC m=+0.154653515 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 04:38:32 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:38:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22243 DF PROTO=TCP SPT=42806 DPT=9102 SEQ=3675944437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66ABEF10000000001030307) Feb 1 04:38:34 localhost nova_compute[274317]: 2026-02-01 09:38:34.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:34 localhost nova_compute[274317]: 2026-02-01 09:38:34.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22244 DF PROTO=TCP SPT=42806 DPT=9102 SEQ=3675944437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66AC30D0000000001030307) Feb 1 04:38:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20006 DF PROTO=TCP SPT=56740 DPT=9102 SEQ=1505691368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66AC50E0000000001030307) Feb 1 04:38:35 localhost nova_compute[274317]: 2026-02-01 09:38:35.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:35 localhost nova_compute[274317]: 2026-02-01 09:38:35.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:38:35 localhost nova_compute[274317]: 2026-02-01 09:38:35.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:38:35 localhost nova_compute[274317]: 2026-02-01 09:38:35.121 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:38:36 localhost nova_compute[274317]: 2026-02-01 09:38:36.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:36 localhost nova_compute[274317]: 2026-02-01 09:38:36.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:36 localhost nova_compute[274317]: 2026-02-01 09:38:36.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22245 DF PROTO=TCP SPT=42806 DPT=9102 SEQ=3675944437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66ACB0D0000000001030307) Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.134 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.134 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.135 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.135 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.136 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.594 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:38:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32304 DF PROTO=TCP SPT=56726 DPT=9102 SEQ=3542751615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66ACF0E0000000001030307) Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.802 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.804 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12894MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.805 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.805 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.888 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.888 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:38:37 localhost nova_compute[274317]: 2026-02-01 09:38:37.914 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:38:38 localhost nova_compute[274317]: 2026-02-01 09:38:38.421 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:38:38 localhost nova_compute[274317]: 2026-02-01 09:38:38.428 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:38:38 localhost nova_compute[274317]: 2026-02-01 09:38:38.447 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:38:38 localhost systemd[1]: tmp-crun.B2HWan.mount: Deactivated successfully. 
Feb 1 04:38:38 localhost nova_compute[274317]: 2026-02-01 09:38:38.450 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:38:38 localhost nova_compute[274317]: 2026-02-01 09:38:38.450 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:38:38 localhost podman[275903]: 2026-02-01 09:38:38.456571862 +0000 UTC m=+0.097696547 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:38:38 localhost podman[275903]: 2026-02-01 09:38:38.590714172 +0000 UTC m=+0.231838817 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
architecture=x86_64, io.openshift.expose-services=, RELEASE=main, vcs-type=git) Feb 1 04:38:39 localhost nova_compute[274317]: 2026-02-01 09:38:39.451 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:38:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22246 DF PROTO=TCP SPT=42806 DPT=9102 SEQ=3675944437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66ADACD0000000001030307) Feb 1 04:38:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:38:41.753 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:38:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:38:41.754 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:38:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:38:41.754 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:38:41 localhost podman[276056]: 2026-02-01 09:38:41.877020209 +0000 UTC m=+0.083749036 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:38:41 localhost podman[276056]: 2026-02-01 09:38:41.885001935 +0000 UTC m=+0.091730792 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:38:41 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:38:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:38:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:38:48 localhost podman[276081]: 2026-02-01 09:38:48.881940986 +0000 UTC m=+0.093805536 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Feb 1 04:38:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:39:49:6a MACDST=fa:16:3e:17:45:d1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22247 DF PROTO=TCP SPT=42806 DPT=9102 SEQ=3675944437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AA66AFB0D0000000001030307) Feb 1 04:38:48 localhost podman[276081]: 2026-02-01 09:38:48.922699295 +0000 UTC m=+0.134563615 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3) Feb 1 04:38:48 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: 
Deactivated successfully. Feb 1 04:38:48 localhost podman[276082]: 2026-02-01 09:38:48.928616307 +0000 UTC m=+0.137493515 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:38:49 localhost podman[276082]: 2026-02-01 09:38:49.01165337 +0000 UTC m=+0.220530528 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:38:49 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:38:57 localhost podman[276181]: 2026-02-01 09:38:57.869924306 +0000 UTC m=+0.085745028 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, release=1769056855, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.) Feb 1 04:38:57 localhost podman[276181]: 2026-02-01 09:38:57.883512925 +0000 UTC m=+0.099333617 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container) Feb 1 04:38:57 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:38:57 localhost podman[276182]: 2026-02-01 09:38:57.97050761 +0000 UTC m=+0.181712279 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 1 04:38:58 localhost podman[276182]: 2026-02-01 09:38:58.005640895 +0000 UTC m=+0.216845554 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:38:58 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:38:59 localhost sshd[276219]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:38:59 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 1 04:38:59 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 1 04:38:59 localhost systemd-logind[761]: New session 62 of user tripleo-admin. Feb 1 04:38:59 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 1 04:38:59 localhost systemd[1]: Starting User Manager for UID 1003... Feb 1 04:38:59 localhost systemd[276223]: Queued start job for default target Main User Target. Feb 1 04:38:59 localhost systemd[276223]: Created slice User Application Slice. Feb 1 04:38:59 localhost systemd[276223]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 04:38:59 localhost systemd[276223]: Started Daily Cleanup of User's Temporary Directories. Feb 1 04:38:59 localhost systemd[276223]: Reached target Paths. Feb 1 04:38:59 localhost systemd[276223]: Reached target Timers. Feb 1 04:38:59 localhost systemd[276223]: Starting D-Bus User Message Bus Socket... Feb 1 04:38:59 localhost systemd[276223]: Starting Create User's Volatile Files and Directories... Feb 1 04:38:59 localhost systemd[276223]: Listening on D-Bus User Message Bus Socket. Feb 1 04:38:59 localhost systemd[276223]: Reached target Sockets. Feb 1 04:38:59 localhost systemd[276223]: Finished Create User's Volatile Files and Directories. Feb 1 04:38:59 localhost systemd[276223]: Reached target Basic System. Feb 1 04:38:59 localhost systemd[276223]: Reached target Main User Target. Feb 1 04:38:59 localhost systemd[276223]: Startup finished in 149ms. Feb 1 04:38:59 localhost systemd[1]: Started User Manager for UID 1003. Feb 1 04:38:59 localhost systemd[1]: Started Session 62 of User tripleo-admin. 
Feb 1 04:39:00 localhost podman[236852]: time="2026-02-01T09:39:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:39:00 localhost podman[236852]: @ - - [01/Feb/2026:09:39:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146808 "" "Go-http-client/1.1" Feb 1 04:39:00 localhost podman[236852]: @ - - [01/Feb/2026:09:39:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16321 "" "Go-http-client/1.1" Feb 1 04:39:00 localhost python3[276366]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:39:00 localhost systemd-journald[47940]: Field hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation. Feb 1 04:39:00 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 04:39:00 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:39:00 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 04:39:01 localhost python3[276511]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 1 04:39:01 localhost openstack_network_exporter[239388]: ERROR 09:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:39:01 localhost openstack_network_exporter[239388]: Feb 1 04:39:01 localhost openstack_network_exporter[239388]: ERROR 09:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:39:01 localhost openstack_network_exporter[239388]: Feb 1 04:39:01 localhost systemd[1]: Stopping Netfilter Tables... Feb 1 04:39:01 localhost systemd[1]: nftables.service: Deactivated successfully. Feb 1 04:39:01 localhost systemd[1]: Stopped Netfilter Tables. Feb 1 04:39:01 localhost systemd[1]: Starting Netfilter Tables... Feb 1 04:39:01 localhost systemd[1]: Finished Netfilter Tables. Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 
04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:39:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:39:03 localhost podman[276535]: 2026-02-01 09:39:03.884515065 +0000 UTC m=+0.091348821 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:39:03 localhost podman[276535]: 2026-02-01 09:39:03.920385962 +0000 UTC m=+0.127219718 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:39:03 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:39:12 localhost podman[276681]: 2026-02-01 09:39:12.329560913 +0000 UTC m=+0.108790672 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:39:12 localhost podman[276681]: 2026-02-01 09:39:12.365960288 +0000 UTC m=+0.145190087 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:39:12 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:39:12 localhost podman[276768]: Feb 1 04:39:12 localhost podman[276768]: 2026-02-01 09:39:12.955609222 +0000 UTC m=+0.087021243 container create 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1764794109, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:39:13 localhost systemd[1]: Started libpod-conmon-0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f.scope. Feb 1 04:39:13 localhost systemd[1]: Started libcrun container. 
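[Annotation] The "Started /usr/bin/podman healthcheck run <id>" units above (and repeated throughout this log) are transient systemd services that execute the container's configured healthcheck; podman then records a health_status event (health_status=healthy in the labels) and the unit deactivates. A minimal sketch for reproducing the same check by hand, assuming only the podman CLI shown in these entries and a container name taken from the log labels (ceilometer_agent_compute); this is illustrative, not the exact code path edpm_ansible uses:

    #!/usr/bin/env python3
    # Sketch: run a container's configured healthcheck and read back its
    # recorded health state, mirroring the transient
    # "/usr/bin/podman healthcheck run <id>" units in this journal.
    # Assumes the podman CLI is on PATH and the container exists locally.
    import json
    import subprocess

    CONTAINER = "ceilometer_agent_compute"  # name taken from the log labels above

    # Exit code 0 means the healthcheck command inside the container succeeded.
    run = subprocess.run(["podman", "healthcheck", "run", CONTAINER])
    print("healthcheck exit code:", run.returncode)

    # The last recorded status ("healthy"/"unhealthy"/"starting") is kept in
    # the container state and can be read back with `podman inspect`.
    out = subprocess.run(
        ["podman", "inspect", "--format", "json", CONTAINER],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0].get("State", {})
    print("recorded health status:", state.get("Health", {}).get("Status"))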
Feb 1 04:39:13 localhost podman[276768]: 2026-02-01 09:39:12.919370893 +0000 UTC m=+0.050782944 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:39:13 localhost podman[276768]: 2026-02-01 09:39:13.031795467 +0000 UTC m=+0.163207498 container init 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-12-08T17:28:53Z, release=1764794109, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-type=git, GIT_BRANCH=main) Feb 1 04:39:13 localhost podman[276768]: 2026-02-01 09:39:13.043114581 +0000 UTC m=+0.174526612 container start 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, GIT_BRANCH=main, ceph=True, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:39:13 localhost podman[276768]: 2026-02-01 09:39:13.043510254 +0000 UTC m=+0.174922275 container attach 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, distribution-scope=public, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, vcs-type=git, version=7) Feb 1 04:39:13 localhost blissful_cohen[276783]: 167 167 Feb 1 04:39:13 localhost systemd[1]: libpod-0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f.scope: Deactivated successfully. Feb 1 04:39:13 localhost podman[276768]: 2026-02-01 09:39:13.046787005 +0000 UTC m=+0.178199066 container died 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1764794109, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:39:13 localhost podman[276788]: 2026-02-01 09:39:13.160005847 +0000 UTC m=+0.104976043 container remove 0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_cohen, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph)
Feb 1 04:39:13 localhost systemd[1]: libpod-conmon-0b5bd4915eb063cd625a2e0f2b0ef572f82700514af29d46155b41174cefc25f.scope: Deactivated successfully.
Feb 1 04:39:13 localhost systemd[1]: Reloading.
Feb 1 04:39:13 localhost systemd-rc-local-generator[276827]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 04:39:13 localhost systemd-sysv-generator[276831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-56125ce038aae7582b72251e3f01def7b65b146c40eacb0a4e07adc2a2360160-merged.mount: Deactivated successfully.
Feb 1 04:39:13 localhost systemd[1]: Reloading.
Feb 1 04:39:13 localhost systemd-rc-local-generator[276872]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 1 04:39:13 localhost systemd-sysv-generator[276875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
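[Annotation] The rhceph container above (auto-named blissful_cohen) is created, started, prints "167 167", and is removed within the same second: a one-shot probe run by the Ceph orchestration tooling, and 167:167 matches the "set uid:gid to 167:167 (ceph:ceph)" line from the MDS daemon further down. The command it actually executed inside the container is not recorded here; as a rough hand-run analogue (an assumption, not the orchestrator's real probe), one could check the ceph account baked into the same image:

    #!/usr/bin/env python3
    # Rough analogue only: start a throwaway container from the same image
    # and read the ceph account's uid/gid from the image's /etc/passwd.
    # Assumes the image is already pulled (as the image pull events suggest)
    # and that getent/cut exist in the RHEL9-based image.
    import subprocess

    IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "sh", "-c", "getent passwd ceph | cut -d: -f3,4"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected to show uid/gid 167, e.g. "167:167"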
Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:39:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:39:14 localhost systemd[1]: Starting Ceph mds.mds.np0005604215.rwvxvg for 33fac0b9-80c7-560f-918a-c92d3021ca1e... Feb 1 04:39:14 localhost podman[276934]: Feb 1 04:39:14 localhost podman[276934]: 2026-02-01 09:39:14.362128918 +0000 UTC m=+0.083522924 container create 80bcdc70795a926b997388022a5fe5b4e291e6a23c353a1e18858d4b05d1df54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mds-mds-np0005604215-rwvxvg, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public) Feb 1 04:39:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aaa67d8cb79d39b31bd82202d5bf1b420176af306b56745e9b5840ff65ee52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 04:39:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aaa67d8cb79d39b31bd82202d5bf1b420176af306b56745e9b5840ff65ee52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:39:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aaa67d8cb79d39b31bd82202d5bf1b420176af306b56745e9b5840ff65ee52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:39:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aaa67d8cb79d39b31bd82202d5bf1b420176af306b56745e9b5840ff65ee52/merged/var/lib/ceph/mds/ceph-mds.np0005604215.rwvxvg supports timestamps until 2038 (0x7fffffff) Feb 1 04:39:14 localhost podman[276934]: 2026-02-01 09:39:14.327200953 +0000 UTC m=+0.048594969 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:39:14 localhost podman[276934]: 2026-02-01 09:39:14.430109414 +0000 UTC m=+0.151503420 container init 80bcdc70795a926b997388022a5fe5b4e291e6a23c353a1e18858d4b05d1df54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mds-mds-np0005604215-rwvxvg, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, version=7, build-date=2025-12-08T17:28:53Z, release=1764794109, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Feb 1 04:39:14 localhost podman[276934]: 2026-02-01 09:39:14.441946936 +0000 UTC m=+0.163340942 container start 80bcdc70795a926b997388022a5fe5b4e291e6a23c353a1e18858d4b05d1df54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mds-mds-np0005604215-rwvxvg, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, release=1764794109, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z) Feb 1 04:39:14 localhost bash[276934]: 80bcdc70795a926b997388022a5fe5b4e291e6a23c353a1e18858d4b05d1df54 Feb 1 04:39:14 localhost systemd[1]: Started Ceph mds.mds.np0005604215.rwvxvg for 33fac0b9-80c7-560f-918a-c92d3021ca1e. 
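[Annotation] The unit started above launches the MDS daemon whose own lines follow: it reports ceph version 18.2.1 (reef), applies MDS map updates from mon.0, and is assigned standby. A quick way to confirm that state from the cluster side is sketched below; it assumes a node with a keyring permitted to read the MDS map (for example client.admin), since the --id openstack key used elsewhere in this log may not have that capability:

    #!/usr/bin/env python3
    # Sketch: ask the monitors for the current MDS map summary. The plain
    # `ceph mds stat` output includes counts such as "1 up:standby", which
    # should reflect the daemon registering in the entries above.
    import subprocess

    out = subprocess.run(
        ["ceph", "mds", "stat", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())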
Feb 1 04:39:14 localhost ceph-mds[276952]: set uid:gid to 167:167 (ceph:ceph) Feb 1 04:39:14 localhost ceph-mds[276952]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Feb 1 04:39:14 localhost ceph-mds[276952]: main not setting numa affinity Feb 1 04:39:14 localhost ceph-mds[276952]: pidfile_write: ignore empty --pid-file Feb 1 04:39:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mds-mds-np0005604215-rwvxvg[276948]: starting mds.mds.np0005604215.rwvxvg at Feb 1 04:39:14 localhost ceph-mds[276952]: mds.mds.np0005604215.rwvxvg Updating MDS map to version 6 from mon.0 Feb 1 04:39:15 localhost ceph-mds[276952]: mds.mds.np0005604215.rwvxvg Updating MDS map to version 7 from mon.0 Feb 1 04:39:15 localhost ceph-mds[276952]: mds.mds.np0005604215.rwvxvg Monitors have assigned me to become a standby. Feb 1 04:39:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:39:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:39:19 localhost podman[276973]: 2026-02-01 09:39:19.875336924 +0000 UTC m=+0.086382302 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:39:19 localhost podman[276973]: 2026-02-01 09:39:19.908682965 +0000 UTC m=+0.119728353 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:39:19 localhost podman[276972]: 2026-02-01 09:39:19.923697984 +0000 UTC m=+0.134616497 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:39:19 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
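[Annotation] The node_exporter healthcheck above shows the exporter published with host networking on port 9100 ('ports': ['9100:9100']) and a trimmed-down collector set; the openstack_network_exporter later in this log is published on 9105 the same way. A minimal sketch for checking from the host that both endpoints answer, assuming those port mappings and that each exporter serves the conventional Prometheus /metrics path:

    #!/usr/bin/env python3
    # Sketch: probe the exporters published with host networking in this log
    # and print a few sample metric lines from each.
    # Ports 9100 (node_exporter) and 9105 (openstack_network_exporter) are
    # taken from the 'ports' entries in the container config_data.
    import urllib.request

    for port in (9100, 9105):
        url = f"http://localhost:{port}/metrics"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
                body = resp.read().decode("utf-8", "replace")
        except OSError as exc:
            print(url, "-> unreachable:", exc)
            continue
        sample = [line for line in body.splitlines()
                  if line and not line.startswith("#")][:3]
        print(url, "->", status, "| sample:", sample)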
Feb 1 04:39:19 localhost podman[276972]: 2026-02-01 09:39:19.960682699 +0000 UTC m=+0.171601232 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:39:19 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:39:20 localhost systemd[1]: tmp-crun.5IFeOy.mount: Deactivated successfully. Feb 1 04:39:20 localhost podman[277145]: 2026-02-01 09:39:20.843221239 +0000 UTC m=+0.090704158 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z) Feb 1 04:39:20 localhost podman[277145]: 2026-02-01 09:39:20.947999554 +0000 UTC m=+0.195482523 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.buildah.version=1.41.4, 
build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, architecture=x86_64, RELEASE=main, version=7, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Feb 1 04:39:21 localhost systemd-logind[761]: Session 61 logged out. Waiting for processes to exit. Feb 1 04:39:21 localhost systemd[1]: session-61.scope: Deactivated successfully. Feb 1 04:39:21 localhost systemd-logind[761]: Removed session 61. Feb 1 04:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:39:28 localhost systemd[1]: tmp-crun.iGqWnf.mount: Deactivated successfully. Feb 1 04:39:28 localhost podman[277261]: 2026-02-01 09:39:28.873788536 +0000 UTC m=+0.080641957 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, vendor=Red Hat, Inc.) 
Feb 1 04:39:28 localhost podman[277261]: 2026-02-01 09:39:28.915339665 +0000 UTC m=+0.122193036 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, release=1769056855) Feb 1 04:39:28 localhost podman[277262]: 2026-02-01 09:39:28.927023182 +0000 UTC m=+0.134112711 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127) Feb 1 04:39:28 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:39:28 localhost podman[277262]: 2026-02-01 09:39:28.95970317 +0000 UTC m=+0.166792749 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:39:28 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:39:30 localhost podman[236852]: time="2026-02-01T09:39:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:39:30 localhost podman[236852]: @ - - [01/Feb/2026:09:39:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149013 "" "Go-http-client/1.1" Feb 1 04:39:30 localhost podman[236852]: @ - - [01/Feb/2026:09:39:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16801 "" "Go-http-client/1.1" Feb 1 04:39:31 localhost openstack_network_exporter[239388]: ERROR 09:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:39:31 localhost openstack_network_exporter[239388]: Feb 1 04:39:31 localhost openstack_network_exporter[239388]: ERROR 09:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:39:31 localhost openstack_network_exporter[239388]: Feb 1 04:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:39:34 localhost systemd[1]: tmp-crun.7MDCDr.mount: Deactivated successfully. 
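[Annotation] The repeated "call(dpif-netdev/pmd-perf-show): please specify an existing datapath" errors above come from openstack_network_exporter invoking ovs-appctl commands that only apply to the userspace (netdev/PMD) datapath; on a host whose bridges use the kernel datapath there is nothing for those commands to report, so the errors are expected noise rather than a failure. A quick way to see which datapaths ovs-vswitchd actually has (sketch, assuming the ovs-appctl CLI is reachable on the host):

    #!/usr/bin/env python3
    # Sketch: list the datapaths ovs-vswitchd is using. If only a kernel
    # ("system") datapath appears, the dpif-netdev/* PMD commands the
    # exporter calls have nothing to operate on, matching the errors above.
    import subprocess

    show = subprocess.run(["ovs-appctl", "dpif/show"],
                          capture_output=True, text=True)
    print(show.stdout or show.stderr)
    # Output lines typically begin with e.g. "system@ovs-system:" for the
    # kernel datapath or "netdev@ovs-netdev:" for a userspace/DPDK datapath.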
Feb 1 04:39:34 localhost podman[277299]: 2026-02-01 09:39:34.86509388 +0000 UTC m=+0.081197045 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:39:34 localhost podman[277299]: 2026-02-01 09:39:34.87484371 +0000 UTC m=+0.090946855 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:39:34 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:39:35 localhost nova_compute[274317]: 2026-02-01 09:39:35.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:35 localhost nova_compute[274317]: 2026-02-01 09:39:35.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:39:35 localhost nova_compute[274317]: 2026-02-01 09:39:35.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:39:35 localhost nova_compute[274317]: 2026-02-01 09:39:35.119 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:39:35 localhost nova_compute[274317]: 2026-02-01 09:39:35.120 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:36 localhost nova_compute[274317]: 2026-02-01 09:39:36.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:36 localhost nova_compute[274317]: 2026-02-01 09:39:36.116 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:36 localhost nova_compute[274317]: 2026-02-01 09:39:36.116 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 
09:39:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.129 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.129 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.130 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.130 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.130 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.572 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.711 274321 WARNING nova.virt.libvirt.driver 
[None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.712 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12867MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.712 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.712 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.786 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.787 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:39:38 localhost nova_compute[274317]: 2026-02-01 09:39:38.812 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:39:39 localhost nova_compute[274317]: 2026-02-01 09:39:39.222 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:39:39 localhost nova_compute[274317]: 2026-02-01 09:39:39.228 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:39:39 localhost nova_compute[274317]: 2026-02-01 09:39:39.242 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:39:39 localhost nova_compute[274317]: 2026-02-01 09:39:39.244 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:39:39 localhost nova_compute[274317]: 2026-02-01 09:39:39.245 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:39:41 localhost nova_compute[274317]: 2026-02-01 09:39:41.245 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:39:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:39:41.755 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:39:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:39:41.755 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:39:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:39:41.755 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:39:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:39:42 localhost systemd[1]: tmp-crun.gZndlo.mount: Deactivated successfully. Feb 1 04:39:42 localhost podman[277361]: 2026-02-01 09:39:42.859412376 +0000 UTC m=+0.074936884 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:39:42 localhost podman[277361]: 2026-02-01 09:39:42.896568346 +0000 UTC m=+0.112092884 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:39:42 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:39:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:39:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:39:50 localhost podman[277451]: 2026-02-01 09:39:50.869159615 +0000 UTC m=+0.082929924 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:39:50 localhost podman[277452]: 2026-02-01 09:39:50.921079567 +0000 UTC m=+0.131072818 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:39:50 localhost podman[277451]: 2026-02-01 09:39:50.95153725 +0000 UTC m=+0.165307609 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:39:50 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:39:51 localhost podman[277452]: 2026-02-01 09:39:51.004841909 +0000 UTC m=+0.214834940 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:39:51 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:39:59 localhost systemd[1]: tmp-crun.nvqiz1.mount: Deactivated successfully. 
Feb 1 04:39:59 localhost podman[277501]: 2026-02-01 09:39:59.864818883 +0000 UTC m=+0.073218426 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:39:59 localhost podman[277501]: 2026-02-01 09:39:59.896547189 +0000 UTC m=+0.104946742 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS) Feb 1 04:39:59 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:39:59 localhost systemd[1]: tmp-crun.6dLjqS.mount: Deactivated successfully. Feb 1 04:39:59 localhost podman[277500]: 2026-02-01 09:39:59.968115776 +0000 UTC m=+0.180049049 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, architecture=x86_64, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:40:00 localhost podman[277500]: 2026-02-01 09:40:00.005586438 +0000 UTC m=+0.217519741 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc.) Feb 1 04:40:00 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:40:00 localhost podman[236852]: time="2026-02-01T09:40:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:40:00 localhost podman[236852]: @ - - [01/Feb/2026:09:40:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149013 "" "Go-http-client/1.1" Feb 1 04:40:00 localhost podman[236852]: @ - - [01/Feb/2026:09:40:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16804 "" "Go-http-client/1.1" Feb 1 04:40:01 localhost openstack_network_exporter[239388]: ERROR 09:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:40:01 localhost openstack_network_exporter[239388]: Feb 1 04:40:01 localhost openstack_network_exporter[239388]: ERROR 09:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:40:01 localhost openstack_network_exporter[239388]: Feb 1 04:40:01 localhost systemd[1]: session-62.scope: Deactivated successfully. Feb 1 04:40:01 localhost systemd[1]: session-62.scope: Consumed 1.345s CPU time. Feb 1 04:40:01 localhost systemd-logind[761]: Session 62 logged out. Waiting for processes to exit. Feb 1 04:40:01 localhost systemd-logind[761]: Removed session 62. Feb 1 04:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:40:05 localhost podman[277539]: 2026-02-01 09:40:05.879407946 +0000 UTC m=+0.090734889 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:40:05 localhost podman[277539]: 2026-02-01 09:40:05.946021176 +0000 UTC m=+0.157348119 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 1 04:40:05 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:40:11 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 1 04:40:11 localhost systemd[276223]: Activating special unit Exit the Session... Feb 1 04:40:11 localhost systemd[276223]: Stopped target Main User Target. Feb 1 04:40:11 localhost systemd[276223]: Stopped target Basic System. Feb 1 04:40:11 localhost systemd[276223]: Stopped target Paths. Feb 1 04:40:11 localhost systemd[276223]: Stopped target Sockets. Feb 1 04:40:11 localhost systemd[276223]: Stopped target Timers. Feb 1 04:40:11 localhost systemd[276223]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 1 04:40:11 localhost systemd[276223]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 04:40:11 localhost systemd[276223]: Closed D-Bus User Message Bus Socket. Feb 1 04:40:11 localhost systemd[276223]: Stopped Create User's Volatile Files and Directories. Feb 1 04:40:11 localhost systemd[276223]: Removed slice User Application Slice. Feb 1 04:40:11 localhost systemd[276223]: Reached target Shutdown. Feb 1 04:40:11 localhost systemd[276223]: Finished Exit the Session. Feb 1 04:40:11 localhost systemd[276223]: Reached target Exit the Session. Feb 1 04:40:11 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 1 04:40:11 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 1 04:40:11 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 1 04:40:11 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 1 04:40:11 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 1 04:40:11 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 1 04:40:11 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 1 04:40:11 localhost systemd[1]: user-1003.slice: Consumed 1.717s CPU time. Feb 1 04:40:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:40:13 localhost podman[277575]: 2026-02-01 09:40:13.653239153 +0000 UTC m=+0.083023127 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:40:13 localhost podman[277575]: 2026-02-01 09:40:13.686722939 +0000 UTC m=+0.116506923 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:40:13 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:40:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:40:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:40:21 localhost podman[277635]: 2026-02-01 09:40:21.876408004 +0000 UTC m=+0.086859928 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:40:21 localhost podman[277635]: 2026-02-01 09:40:21.883968251 +0000 UTC m=+0.094420215 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:40:21 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:40:21 localhost podman[277634]: 2026-02-01 09:40:21.925120376 +0000 UTC m=+0.138885962 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:40:21 localhost podman[277634]: 2026-02-01 09:40:21.991276051 +0000 UTC m=+0.205041627 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:40:22 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:40:30 localhost podman[236852]: time="2026-02-01T09:40:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:40:30 localhost podman[236852]: @ - - [01/Feb/2026:09:40:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149013 "" "Go-http-client/1.1" Feb 1 04:40:30 localhost podman[236852]: @ - - [01/Feb/2026:09:40:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16804 "" "Go-http-client/1.1" Feb 1 04:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:40:30 localhost podman[277684]: 2026-02-01 09:40:30.876315135 +0000 UTC m=+0.082194979 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible) Feb 1 04:40:30 localhost podman[277684]: 2026-02-01 09:40:30.906780069 +0000 UTC m=+0.112659923 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:40:30 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:40:30 localhost podman[277683]: 2026-02-01 09:40:30.984214816 +0000 UTC m=+0.194556591 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.7, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7) Feb 1 04:40:31 localhost podman[277683]: 2026-02-01 09:40:31.025817148 +0000 UTC m=+0.236158913 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, release=1769056855, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Feb 1 04:40:31 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:40:31 localhost openstack_network_exporter[239388]: ERROR 09:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:40:31 localhost openstack_network_exporter[239388]: Feb 1 04:40:31 localhost openstack_network_exporter[239388]: ERROR 09:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:40:31 localhost openstack_network_exporter[239388]: Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.123 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.124 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.125 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration 
_cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:40:34 localhost nova_compute[274317]: 2026-02-01 09:40:34.136 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:36 localhost nova_compute[274317]: 2026-02-01 09:40:36.145 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:40:36 localhost podman[277722]: 2026-02-01 09:40:36.836518626 +0000 UTC m=+0.088468232 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:40:36 localhost podman[277722]: 2026-02-01 09:40:36.845842703 +0000 UTC m=+0.097792359 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:40:36 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:40:37 localhost nova_compute[274317]: 2026-02-01 09:40:37.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:37 localhost nova_compute[274317]: 2026-02-01 09:40:37.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:40:37 localhost nova_compute[274317]: 2026-02-01 09:40:37.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:40:37 localhost nova_compute[274317]: 2026-02-01 09:40:37.121 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:40:37 localhost nova_compute[274317]: 2026-02-01 09:40:37.121 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:38 localhost nova_compute[274317]: 2026-02-01 09:40:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:38 localhost nova_compute[274317]: 2026-02-01 09:40:38.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:38 localhost nova_compute[274317]: 2026-02-01 09:40:38.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.124 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.124 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.124 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 
09:40:39.125 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.125 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.567 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.769 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.771 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12872MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] 
_report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.771 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.771 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.989 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:40:39 localhost nova_compute[274317]: 2026-02-01 09:40:39.989 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.130 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.225 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.226 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.252 274321 DEBUG nova.scheduler.client.report [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.271 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.288 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.749 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.757 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.781 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 
'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.784 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:40:40 localhost nova_compute[274317]: 2026-02-01 09:40:40.784 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:40:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:40:41.756 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:40:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:40:41.757 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:40:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:40:41.758 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:40:42 localhost nova_compute[274317]: 2026-02-01 09:40:42.785 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
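Note: the resource audit above sizes its RBD-backed disk pool by shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" through oslo_concurrency.processutils (two runs, each returning 0 in under half a second). The sketch below reproduces that probe stand-alone; it assumes the caller can read /etc/ceph/ceph.conf and the "openstack" keyring, and it deliberately does not assume any particular field layout inside the JSON that comes back.

import json
import subprocess

# Same probe the nova resource tracker logs above; the --id/--conf pair must
# point at a client that can reach the cluster (an assumption of this sketch).
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
result.check_returncode()  # the log shows rc=0 in ~0.45s per call

report = json.loads(result.stdout)
# List the top-level sections without assuming their exact schema.
for key, value in report.items():
    print(key, type(value).__name__)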
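Note: the inventory pushed to placement for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 is quoted verbatim above. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the logged figures imply the numbers below; min_unit/max_unit/step_size are dropped here because they do not enter that calculation. This is only a sanity check of the arithmetic, not a query against the placement API.

# Inventory as logged for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590,
# keeping only the fields that feed the capacity formula.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")

# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41 -- consistent with the 16x CPU
# overcommit and the 512 MB reservation shown in the final resource view
# (phys_ram=15738MB used_ram=512MB).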
Feb 1 04:40:43 localhost podman[277839]: 2026-02-01 09:40:43.91457424 +0000 UTC m=+0.089951812 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:40:43 localhost podman[277839]: 2026-02-01 09:40:43.92341014 +0000 UTC m=+0.098787702 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:40:43 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
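Note: each health_status event above embeds the full config_data that edpm_ansible recorded when it created the container. The sketch below shows approximately how such a dict maps onto a plain podman run command, for inspection only; edpm_ansible's real role may pass additional options (the healthcheck and recreate keys are ignored here), and the name "podman_exporter" is taken from the event's container_name label.

import shlex

# config_data as recorded in the podman_exporter health_status event.
config_data = {
    "environment": {"CONTAINER_HOST": "unix:///run/podman/podman.sock",
                    "OS_ENDPOINT_TYPE": "internal"},
    "image": "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd",
    "net": "host",
    "ports": ["9882:9882"],
    "privileged": True,
    "restart": "always",
    "user": "root",
    "volumes": ["/run/podman/podman.sock:/run/podman/podman.sock:rw,z",
                "/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z"],
}

cmd = ["podman", "run", "--detach", "--name", "podman_exporter",
       "--net", config_data["net"],
       "--restart", config_data["restart"],
       "--user", config_data["user"]]
if config_data.get("privileged"):
    cmd.append("--privileged")
for key, value in config_data["environment"].items():
    cmd += ["--env", f"{key}={value}"]
for port in config_data["ports"]:
    cmd += ["--publish", port]
for volume in config_data["volumes"]:
    cmd += ["--volume", volume]
cmd.append(config_data["image"])

print(shlex.join(cmd))  # printed for review; nothing is executed here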
Feb 1 04:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:40:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5505 writes, 24K keys, 5505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5505 writes, 787 syncs, 6.99 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 72 writes, 206 keys, 72 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s#012Interval WAL: 72 writes, 36 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:40:46 localhost podman[277940]: Feb 1 04:40:46 localhost podman[277940]: 2026-02-01 09:40:46.949439139 +0000 UTC m=+0.082091776 container create 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, ceph=True, maintainer=Guillaume Abrioux , release=1764794109, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z) Feb 1 04:40:46 localhost systemd[1]: Started libpod-conmon-5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037.scope. Feb 1 04:40:47 localhost systemd[1]: Started libcrun container. 
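Note: the ceph-osd rocksdb "DUMPING STATS" blocks land in the journal as one long line because the syslog transport escapes embedded newlines as #012 (octal for LF). A tiny helper makes those dumps readable again when working with this log offline; it assumes nothing beyond the #012 convention visible above.

def unescape_syslog(line: str) -> str:
    # rsyslog/journald render control characters as #<octal>; LF is #012.
    return line.replace("#012", "\n")

sample = ("** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval"
          "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
print(unescape_syslog(sample))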
Feb 1 04:40:47 localhost podman[277940]: 2026-02-01 09:40:46.913717936 +0000 UTC m=+0.046370573 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:40:47 localhost podman[277940]: 2026-02-01 09:40:47.020054614 +0000 UTC m=+0.152707261 container init 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, release=1764794109, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.) Feb 1 04:40:47 localhost podman[277940]: 2026-02-01 09:40:47.032217867 +0000 UTC m=+0.164870514 container start 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 1 04:40:47 localhost podman[277940]: 2026-02-01 09:40:47.03260621 +0000 UTC m=+0.165258887 container attach 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1764794109, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:40:47 localhost reverent_grothendieck[277955]: 167 167 Feb 1 04:40:47 localhost systemd[1]: libpod-5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037.scope: Deactivated successfully. Feb 1 04:40:47 localhost podman[277940]: 2026-02-01 09:40:47.036334526 +0000 UTC m=+0.168987193 container died 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, vcs-type=git, ceph=True, release=1764794109, io.openshift.expose-services=) Feb 1 04:40:47 localhost podman[277960]: 2026-02-01 09:40:47.137130646 +0000 UTC m=+0.088057358 container remove 5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_grothendieck, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, GIT_BRANCH=main, 
io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=) Feb 1 04:40:47 localhost systemd[1]: libpod-conmon-5d054bf9791526d04c802ba00d7f85b01b870783cbea9284629f2489e3a32037.scope: Deactivated successfully. Feb 1 04:40:47 localhost systemd[1]: Reloading. Feb 1 04:40:47 localhost systemd-rc-local-generator[278000]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:40:47 localhost systemd-sysv-generator[278005]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: var-lib-containers-storage-overlay-9f75573cc0ffefa021332aff24a0c053100173a63aaee9f562d403af5a323898-merged.mount: Deactivated successfully. Feb 1 04:40:47 localhost systemd[1]: Reloading. Feb 1 04:40:47 localhost systemd-rc-local-generator[278040]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:40:47 localhost systemd-sysv-generator[278046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
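Note: both "Reloading." passes above emit the same batch of warnings: the libvirt modular daemons ship Type=notify-reload, which this host's systemd does not parse and therefore ignores, and insights-client-boot.service still uses the deprecated MemoryLimit= setting. A scan like the one below lists every installed unit file that will keep producing these messages on each daemon-reload; the directory and directive strings are taken straight from the warnings, nothing else is assumed.

import glob

# Directives named in the reload warnings above.
suspects = ("Type=notify-reload", "MemoryLimit=")

for path in sorted(glob.glob("/usr/lib/systemd/system/*.service")):
    try:
        with open(path, encoding="utf-8", errors="replace") as f:
            text = f.read()
    except OSError:
        continue
    hits = [s for s in suspects if s in text]
    if hits:
        print(f"{path}: {', '.join(hits)}")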
Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:40:47 localhost systemd[1]: Starting Ceph mgr.np0005604215.uhhqtv for 33fac0b9-80c7-560f-918a-c92d3021ca1e... Feb 1 04:40:48 localhost podman[278107]: Feb 1 04:40:48 localhost podman[278107]: 2026-02-01 09:40:48.324602861 +0000 UTC m=+0.077812901 container create 3e1e2afd626fdcaa281a17632f441dfc56dec59adcb1f539c6182d51b14f5b79 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127ade48dc529dd3f486f353811e1b3227dc30b60c3f77c1d2176b946810a8e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127ade48dc529dd3f486f353811e1b3227dc30b60c3f77c1d2176b946810a8e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 04:40:48 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/127ade48dc529dd3f486f353811e1b3227dc30b60c3f77c1d2176b946810a8e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:40:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127ade48dc529dd3f486f353811e1b3227dc30b60c3f77c1d2176b946810a8e6/merged/var/lib/ceph/mgr/ceph-np0005604215.uhhqtv supports timestamps until 2038 (0x7fffffff) Feb 1 04:40:48 localhost podman[278107]: 2026-02-01 09:40:48.386575253 +0000 UTC m=+0.139785293 container init 3e1e2afd626fdcaa281a17632f441dfc56dec59adcb1f539c6182d51b14f5b79 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1764794109, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Feb 1 04:40:48 localhost podman[278107]: 2026-02-01 09:40:48.292766991 +0000 UTC m=+0.045977061 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:40:48 localhost podman[278107]: 2026-02-01 09:40:48.39562628 +0000 UTC m=+0.148836330 container start 3e1e2afd626fdcaa281a17632f441dfc56dec59adcb1f539c6182d51b14f5b79 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public) Feb 1 04:40:48 localhost bash[278107]: 
3e1e2afd626fdcaa281a17632f441dfc56dec59adcb1f539c6182d51b14f5b79 Feb 1 04:40:48 localhost systemd[1]: Started Ceph mgr.np0005604215.uhhqtv for 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 04:40:48 localhost ceph-mgr[278126]: set uid:gid to 167:167 (ceph:ceph) Feb 1 04:40:48 localhost ceph-mgr[278126]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Feb 1 04:40:48 localhost ceph-mgr[278126]: pidfile_write: ignore empty --pid-file Feb 1 04:40:48 localhost ceph-mgr[278126]: mgr[py] Loading python module 'alerts' Feb 1 04:40:48 localhost ceph-mgr[278126]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 1 04:40:48 localhost ceph-mgr[278126]: mgr[py] Loading python module 'balancer' Feb 1 04:40:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:48.569+0000 7fcf09996140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 1 04:40:48 localhost ceph-mgr[278126]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 1 04:40:48 localhost ceph-mgr[278126]: mgr[py] Loading python module 'cephadm' Feb 1 04:40:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:48.635+0000 7fcf09996140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Loading python module 'crash' Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Loading python module 'dashboard' Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:49.272+0000 7fcf09996140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Loading python module 'devicehealth' Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Loading python module 'diskprediction_local' Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:49.811+0000 7fcf09996140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
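Note: every "Module <name> has missing NOTIFY_TYPES member" line during the mgr start-up means the named python module does not declare which cluster notifications it handles; the modules still load, and each message appears twice because the container also copies its stderr into the journal. When reading this journal offline, a short filter collapses the noise into one list of affected modules; the regex relies only on the wording shown above.

import re
import sys

# Pipe journal text (e.g. this log file) into stdin and get a de-duplicated
# list of mgr modules flagged for a missing NOTIFY_TYPES declaration.
pattern = re.compile(r"Module (\S+) has missing NOTIFY_TYPES member")

modules = set()
for line in sys.stdin:
    match = pattern.search(line)
    if match:
        modules.add(match.group(1))

print(f"{len(modules)} modules without NOTIFY_TYPES:", ", ".join(sorted(modules)))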
Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: from numpy import show_config as show_numpy_config Feb 1 04:40:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:49.945+0000 7fcf09996140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 1 04:40:49 localhost ceph-mgr[278126]: mgr[py] Loading python module 'influx' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'insights' Feb 1 04:40:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:50.002+0000 7fcf09996140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 1 04:40:50 localhost systemd[1]: tmp-crun.t2YFWX.mount: Deactivated successfully. Feb 1 04:40:50 localhost podman[278280]: 2026-02-01 09:40:50.027281075 +0000 UTC m=+0.112441486 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, name=rhceph, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=) Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'iostat' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'k8sevents' Feb 1 04:40:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:50.118+0000 7fcf09996140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 1 04:40:50 localhost podman[278280]: 2026-02-01 09:40:50.15921529 +0000 UTC m=+0.244375691 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:40:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5317 writes, 23K keys, 5317 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5317 writes, 693 syncs, 7.67 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 94 writes, 314 keys, 94 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s#012Interval WAL: 94 writes, 35 syncs, 2.69 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'localpool' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'mds_autoscaler' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'mirroring' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'nfs' Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 1 04:40:50 localhost ceph-mgr[278126]: mgr[py] Loading python module 'orchestrator' Feb 1 04:40:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:50.887+0000 7fcf09996140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'osd_perf_query' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.034+0000 7fcf09996140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'osd_support' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.099+0000 7fcf09996140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'pg_autoscaler' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.156+0000 7fcf09996140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: 
mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.224+0000 7fcf09996140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'progress' Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'prometheus' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.284+0000 7fcf09996140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rbd_support' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.588+0000 7fcf09996140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'restful' Feb 1 04:40:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:51.670+0000 7fcf09996140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 1 04:40:51 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rgw' Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.043+0000 7fcf09996140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rook' Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'selftest' Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.491+0000 7fcf09996140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'snap_schedule' Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.554+0000 7fcf09996140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'stats' Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'status' Feb 1 04:40:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:40:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
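Note: the recurring "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs are the transient systemd units that fire each container's configured health test (the 'healthcheck' entry inside config_data). The same test can be triggered by hand; this sketch assumes only that "podman healthcheck run" exits non-zero when the test fails, which is consistent with the health_status=healthy events recorded in this log.

import subprocess
import sys

def healthcheck(container: str) -> bool:
    """Run the container's configured healthcheck once; True if it passed."""
    result = subprocess.run(["podman", "healthcheck", "run", container],
                            capture_output=True, text=True)
    if result.returncode != 0:
        detail = (result.stdout or result.stderr).strip()
        print(f"{container}: unhealthy ({detail})", file=sys.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    # Container names taken from the health_status events in this log.
    for name in ("podman_exporter", "ovn_controller", "node_exporter"):
        print(name, "healthy" if healthcheck(name) else "unhealthy")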
Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'telegraf' Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.745+0000 7fcf09996140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.807+0000 7fcf09996140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'telemetry' Feb 1 04:40:52 localhost systemd[1]: tmp-crun.UyNTar.mount: Deactivated successfully. Feb 1 04:40:52 localhost podman[278524]: 2026-02-01 09:40:52.859618252 +0000 UTC m=+0.097402746 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127) Feb 1 04:40:52 localhost systemd[1]: tmp-crun.174CZk.mount: Deactivated successfully. 
Feb 1 04:40:52 localhost podman[278524]: 2026-02-01 09:40:52.897806596 +0000 UTC m=+0.135591110 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:40:52 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost ceph-mgr[278126]: mgr[py] Loading python module 'test_orchestrator' Feb 1 04:40:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:52.943+0000 7fcf09996140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 1 04:40:52 localhost podman[278525]: 2026-02-01 09:40:52.909700151 +0000 UTC m=+0.147384982 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:40:52 localhost podman[278525]: 2026-02-01 09:40:52.992662855 +0000 UTC m=+0.230347696 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:40:53 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:40:53 localhost ceph-mgr[278126]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-mgr[278126]: mgr[py] Loading python module 'volumes' Feb 1 04:40:53 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:53.090+0000 7fcf09996140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-mgr[278126]: mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-mgr[278126]: mgr[py] Loading python module 'zabbix' Feb 1 04:40:53 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:53.278+0000 7fcf09996140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-mgr[278126]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:40:53.337+0000 7fcf09996140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 1 04:40:53 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f91e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 1 04:40:53 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1614340691 Feb 1 04:41:00 localhost podman[236852]: time="2026-02-01T09:41:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:41:00 localhost podman[236852]: @ - - [01/Feb/2026:09:41:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151207 "" "Go-http-client/1.1" Feb 1 04:41:00 localhost podman[236852]: @ - - [01/Feb/2026:09:41:00 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17288 "" "Go-http-client/1.1" Feb 1 04:41:01 localhost openstack_network_exporter[239388]: ERROR 09:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:41:01 localhost openstack_network_exporter[239388]: Feb 1 04:41:01 localhost openstack_network_exporter[239388]: ERROR 09:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:41:01 localhost openstack_network_exporter[239388]: Feb 1 04:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:41:01 localhost podman[278607]: 2026-02-01 09:41:01.747646806 +0000 UTC m=+0.085204232 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, release=1769056855, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git) Feb 1 04:41:01 localhost podman[278607]: 2026-02-01 09:41:01.75454825 +0000 UTC m=+0.092105676 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-type=git, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64) Feb 1 04:41:01 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
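The podman[236852] entries above are the Podman system service answering libpod REST calls over its API socket (a container listing followed by a stats query). A minimal Python sketch of the same listing, assuming the default root socket path /run/podman/podman.sock, which the log itself does not show:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # Plain HTTP, but carried over a unix socket instead of TCP.
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._path)
        self.sock = s

# Assumed socket path; adjust if the service listens elsewhere.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
reply = conn.getresponse()
for c in json.loads(reply.read()):
    # Field names as returned by the libpod list endpoint.
    print(c.get("Names"), c.get("State"))
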
Feb 1 04:41:01 localhost podman[278608]: 2026-02-01 09:41:01.791242858 +0000 UTC m=+0.127893866 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:41:01 localhost podman[278608]: 2026-02-01 09:41:01.825815689 +0000 UTC m=+0.162466727 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:41:01 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:41:02 localhost podman[278705]: Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.313643532 +0000 UTC m=+0.076641477 container create da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, release=1764794109, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Feb 1 04:41:02 localhost systemd[1]: Started libpod-conmon-da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5.scope. Feb 1 04:41:02 localhost systemd[1]: Started libcrun container. 
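Each "Started /usr/bin/podman healthcheck run <id>" line is a timer-driven transient unit; podman then emits a health_status event, an exec_died event when the check command exits, and systemd marks the transient unit deactivated. The same check can be run by hand; a short sketch, using the ovn_metadata_agent container ID from the entries above:

import subprocess

# Container ID copied from the ovn_metadata_agent healthcheck entries above.
cid = "412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5"

# `podman healthcheck run` exits 0 when the container's configured test passes.
result = subprocess.run(["podman", "healthcheck", "run", cid],
                        capture_output=True, text=True)
print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")
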
Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.281261458 +0000 UTC m=+0.044259433 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.390716041 +0000 UTC m=+0.153713976 container init da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.401040642 +0000 UTC m=+0.164038587 container start da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, vcs-type=git, ceph=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, architecture=x86_64, io.buildah.version=1.41.4, release=1764794109, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux ) Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.401669201 +0000 UTC m=+0.164667216 container attach da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, 
architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1764794109, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:41:02 localhost kind_faraday[278720]: 167 167 Feb 1 04:41:02 localhost systemd[1]: libpod-da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5.scope: Deactivated successfully. Feb 1 04:41:02 localhost podman[278705]: 2026-02-01 09:41:02.404332124 +0000 UTC m=+0.167330079 container died da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, RELEASE=main, GIT_CLEAN=True) Feb 1 04:41:02 localhost podman[278725]: 2026-02-01 09:41:02.472066693 +0000 UTC m=+0.060042942 container remove da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_faraday, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, release=1764794109, 
RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, name=rhceph) Feb 1 04:41:02 localhost systemd[1]: libpod-conmon-da202e30075422f929b294226b6db783a1ccde3b106e2aacd84eb9f560d669c5.scope: Deactivated successfully. Feb 1 04:41:02 localhost podman[278742]: Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.574186358 +0000 UTC m=+0.067069879 container create fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, build-date=2025-12-08T17:28:53Z, release=1764794109, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:41:02 localhost systemd[1]: Started libpod-conmon-fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14.scope. Feb 1 04:41:02 localhost systemd[1]: Started libcrun container. 
Feb 1 04:41:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702e3ad5d22d14a762824bbbd34de918998d569142807e785cde7a03f4abe1b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702e3ad5d22d14a762824bbbd34de918998d569142807e785cde7a03f4abe1b/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702e3ad5d22d14a762824bbbd34de918998d569142807e785cde7a03f4abe1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702e3ad5d22d14a762824bbbd34de918998d569142807e785cde7a03f4abe1b/merged/var/lib/ceph/mon/ceph-np0005604215 supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.632704713 +0000 UTC m=+0.125588264 container init fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1764794109, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, version=7, name=rhceph, io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
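The recurring kernel notices about xfs overlay mounts supporting timestamps only until 2038 (0x7fffffff) report the 32-bit limit of the on-disk timestamp range for these filesystems; the quoted hex value maps to the familiar cutoff date:

from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch -- the limit quoted by the kernel.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
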
Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.643064204 +0000 UTC m=+0.135947745 container start fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux ) Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.643329122 +0000 UTC m=+0.136212673 container attach fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, release=1764794109, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.550417862 +0000 UTC m=+0.043301443 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:41:02 localhost systemd[1]: var-lib-containers-storage-overlay-674c251a4adb6769dffcdf1ab7f1f6b0328d62740216215cf5b16aa3ecaf8094-merged.mount: Deactivated successfully. Feb 1 04:41:02 localhost systemd[1]: libpod-fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14.scope: Deactivated successfully. 
Feb 1 04:41:02 localhost podman[278742]: 2026-02-01 09:41:02.770572597 +0000 UTC m=+0.263456158 container died fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, RELEASE=main, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:41:02 localhost systemd[1]: tmp-crun.bv3AiM.mount: Deactivated successfully. Feb 1 04:41:02 localhost systemd[1]: var-lib-containers-storage-overlay-c702e3ad5d22d14a762824bbbd34de918998d569142807e785cde7a03f4abe1b-merged.mount: Deactivated successfully. Feb 1 04:41:02 localhost podman[278783]: 2026-02-01 09:41:02.87162573 +0000 UTC m=+0.087885006 container remove fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wu, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, version=7, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:41:02 localhost systemd[1]: libpod-conmon-fb238d7504b1c940a89bbda9a453f78d5df44d9139964b6bff2f80bac05a8a14.scope: Deactivated successfully. Feb 1 04:41:02 localhost systemd[1]: Reloading. Feb 1 04:41:03 localhost systemd-rc-local-generator[278821]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:41:03 localhost systemd-sysv-generator[278824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: Reloading. Feb 1 04:41:03 localhost systemd-rc-local-generator[278863]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:41:03 localhost systemd-sysv-generator[278869]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
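systemd-rc-local-generator skips /etc/rc.d/rc.local solely because the file is not marked executable; a quick way to confirm what the generator sees, using the path from the message above:

import os
import stat

path = "/etc/rc.d/rc.local"  # path taken from the generator message above
try:
    mode = os.stat(path).st_mode
    print(path, "is executable" if mode & stat.S_IXUSR
          else "is not executable; the generator skips it")
except FileNotFoundError:
    print(path, "does not exist")
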
Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:41:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:41:03 localhost systemd[1]: Starting Ceph mon.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e... Feb 1 04:41:03 localhost podman[278931]: Feb 1 04:41:03 localhost podman[278931]: 2026-02-01 09:41:03.920654519 +0000 UTC m=+0.067560725 container create e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph) Feb 1 04:41:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83968a87c9b2ae83e102a25fe5279ef93ea6c64bdb6aa577d60da58f4409de1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83968a87c9b2ae83e102a25fe5279ef93ea6c64bdb6aa577d60da58f4409de1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83968a87c9b2ae83e102a25fe5279ef93ea6c64bdb6aa577d60da58f4409de1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:03 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/e83968a87c9b2ae83e102a25fe5279ef93ea6c64bdb6aa577d60da58f4409de1/merged/var/lib/ceph/mon/ceph-np0005604215 supports timestamps until 2038 (0x7fffffff) Feb 1 04:41:03 localhost podman[278931]: 2026-02-01 09:41:03.971116103 +0000 UTC m=+0.118022319 container init e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 1 04:41:03 localhost systemd[1]: tmp-crun.7ODqRs.mount: Deactivated successfully. Feb 1 04:41:03 localhost podman[278931]: 2026-02-01 09:41:03.889264186 +0000 UTC m=+0.036170432 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:41:04 localhost ceph-mon[278949]: set uid:gid to 167:167 (ceph:ceph) Feb 1 04:41:04 localhost ceph-mon[278949]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Feb 1 04:41:04 localhost bash[278931]: e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 Feb 1 04:41:04 localhost ceph-mon[278949]: pidfile_write: ignore empty --pid-file Feb 1 04:41:04 localhost podman[278931]: 2026-02-01 09:41:04.024920441 +0000 UTC m=+0.171826647 container start e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, version=7, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:41:04 localhost systemd[1]: Started Ceph mon.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 04:41:04 localhost ceph-mon[278949]: load: jerasure load: lrc Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: RocksDB version: 7.9.2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Git sha 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: DB SUMMARY Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: DB Session ID: 7PKSWXLLH9M8NB5FULPW Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: CURRENT file: CURRENT Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: IDENTITY file: IDENTITY Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005604215/store.db dir, Total Num: 0, files: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005604215/store.db: 000004.log size: 761 ; Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.error_if_exists: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.create_if_missing: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.paranoid_checks: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.env: 0x560754d4f9e0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.fs: PosixFileSystem Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.info_log: 0x560755f86d20 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.statistics: (nil) Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.use_fsync: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_log_file_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_fallocate: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.use_direct_reads: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.create_missing_column_families: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.db_log_dir: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.wal_dir: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: 
rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.advise_random_on_open: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.write_buffer_manager: 0x560755f97540 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.rate_limiter: (nil) Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.unordered_write: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.row_cache: None Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.wal_filter: None Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.two_write_queues: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.manual_wal_flush: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.wal_compression: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.atomic_flush: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.log_readahead_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.allow_data_in_errors: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.db_host_id: __hostname__ Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_background_jobs: 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_background_compactions: -1 
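ceph-mon logs every effective RocksDB option for its mon store at startup; when comparing monitors, the dump is easier to handle as key/value pairs. A small parsing sketch over a saved journal excerpt (the file name is hypothetical):

import re

# Matches entries such as "rocksdb: Options.write_buffer_size: 33554432".
opt_re = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+)\s*:\s*(.*\S)\s*$")

options = {}
with open("ceph-mon-np0005604215.log") as f:  # hypothetical excerpt of the lines above
    for line in f:
        m = opt_re.search(line)
        if m:
            options[m.group(1)] = m.group(2)

print(options.get("write_buffer_size"), options.get("max_open_files"))
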
Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_subcompactions: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_total_wal_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_open_files: -1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bytes_per_sync: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_readahead_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_background_flushes: -1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Compression algorithms supported: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kZSTD supported: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kXpressCompression supported: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kZlibCompression supported: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005604215/store.db/MANIFEST-000005 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.merge_operator: Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_filter: None Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_filter_factory: None Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.sst_partitioner_factory: None Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560755f86980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 
0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x560755f83350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.write_buffer_size: 33554432 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_write_buffer_number: 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression: NoCompression Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression: Disabled Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.prefix_extractor: nullptr Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.num_levels: 7 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.level: 32767 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: 
Options.compression_opts.enabled: false Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_base: 268435456 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.arena_block_size: 1048576 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.table_properties_collectors: Feb 1 04:41:04 localhost 
ceph-mon[278949]: rocksdb: Options.inplace_update_support: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.bloom_locality: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.max_successive_merges: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.force_consistency_checks: 1 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.ttl: 2592000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enable_blob_files: false Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.min_blob_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_file_size: 268435456 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005604215/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c9a40fa3-7e53-4325-8a76-a86e4a0fff5d Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938864045657, "job": 1, "event": "recovery_started", "wal_files": [4]} Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938864048779, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938864049164, "job": 1, "event": "recovery_finished"} Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560755faae00 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: DB pointer 0x5607560a0000 Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:41:04 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560755f83350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215 does not exist in monmap, will attempt to join an existing cluster Feb 1 04:41:04 localhost ceph-mon[278949]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] Feb 1 04:41:04 localhost ceph-mon[278949]: starting mon.np0005604215 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005604215 fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(???) 
e0 preinit fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing) e3 sync_obtain_latest_monmap Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3 Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).mds e16 new map Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T07:59:04.480309+0000#012modified#0112026-02-01T09:39:55.510678+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26329}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26329 members: 26329#012[mds.mds.np0005604212.tkdkxt{0:26329} state up:active seq 12 addr [v2:172.18.0.106:6808/1133321306,v1:172.18.0.106:6809/1133321306] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005604215.rwvxvg{-1:16872} state up:standby seq 1 addr [v2:172.18.0.108:6808/2262553558,v1:172.18.0.108:6809/2262553558] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005604213.jdbvyh{-1:16878} state up:standby seq 1 addr [v2:172.18.0.107:6808/3323601884,v1:172.18.0.107:6809/3323601884] compat {c=[1],r=[1],i=[17ff]}] Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).osd e81 crush map has features 3314933000852226048, adjusting msgr requires Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:41:04 localhost ceph-mon[278949]: Removing key for mds.mds.np0005604210.yulljq Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005604210.yulljq"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005604210.yulljq"}]': finished Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 
172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Removing daemon mds.mds.np0005604211.ggsxcc from np0005604211.localdomain -- ports [] Feb 1 04:41:04 localhost ceph-mon[278949]: Removing key for mds.mds.np0005604211.ggsxcc Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005604211.ggsxcc"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005604211.ggsxcc"}]': finished Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mgr to host np0005604212.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mgr to host np0005604213.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 
172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mgr to host np0005604215.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Saving service mgr spec with placement label:mgr Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 1 04:41:04 localhost ceph-mon[278949]: Deploying daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 1 04:41:04 localhost ceph-mon[278949]: Deploying daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604209.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604209.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 
04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 1 04:41:04 localhost ceph-mon[278949]: Deploying daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604210.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604210.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604211.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604211.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost 
ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604212.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604212.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604213.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604213.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label mon to host np0005604215.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:04 localhost ceph-mon[278949]: 
from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Added label _admin to host np0005604215.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:41:04 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:04 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:04 localhost ceph-mon[278949]: Deploying daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:41:04 localhost ceph-mon[278949]: mon.np0005604215@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Feb 1 04:41:04 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f91e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 1 04:41:06 localhost ceph-mon[278949]: mon.np0005604215@-1(probing) e4 my rank is now 3 (was -1) Feb 1 04:41:06 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:06 localhost ceph-mon[278949]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 Feb 1 04:41:06 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:06 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Feb 1 04:41:06 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Feb 1 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:41:07 localhost systemd[1]: tmp-crun.xkU4JE.mount: Deactivated successfully. 
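The cephadm/mgr activity in the surrounding entries is audited as "from='mgr...' entity='...' cmd={...} : dispatch" records whose cmd payload is JSON. A minimal sketch, assuming that shape (extract_audit_cmds is an illustrative name, not a cephadm tool), for listing the commands that were dispatched:

import json
import re

# One match per dispatched command; the cmd={...} payload is plain JSON.
AUDIT_RE = re.compile(r"entity='(?P<entity>[^']+)' cmd=(?P<cmd>\{.*?\}) : dispatch")

def extract_audit_cmds(raw_text):
    """Yield (entity, command prefix, full command dict) per ': dispatch' record."""
    for m in AUDIT_RE.finditer(raw_text):
        cmd = json.loads(m.group("cmd"))
        yield m.group("entity"), cmd.get("prefix"), cmd

# On the entries above this yields the 'auth rm', 'auth get' and
# 'auth get-or-create' commands issued by mgr.np0005604209.isqrps.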
Feb 1 04:41:07 localhost podman[278988]: 2026-02-01 09:41:07.902410992 +0000 UTC m=+0.086801212 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:41:07 localhost podman[278988]: 2026-02-01 09:41:07.915695134 +0000 UTC m=+0.100085314 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 1 04:41:07 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:41:08 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:09 localhost ceph-mon[278949]: mgrc update_daemon_metadata mon.np0005604215 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005604215.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005604215.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116604,os=Linux} Feb 1 04:41:09 localhost ceph-mon[278949]: Deploying daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604209 calling monitor election Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604209 is new leader, mons np0005604209,np0005604211,np0005604210,np0005604215 in quorum (ranks 0,1,2,3) Feb 1 04:41:09 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:09 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:09 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' 
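Election outcomes in this stream are summarized by entries of the form "mon.<name> is new leader, mons <a,b,...> in quorum (ranks ...)". A minimal sketch, assuming that shape (parse_quorum is an illustrative name), for recovering the current leader and quorum membership:

import re

QUORUM_RE = re.compile(
    r"mon\.(\S+) is new leader, mons (\S+) in quorum \(ranks ([\d,]+)\)")

def parse_quorum(raw_text):
    """Return (leader, [members], [ranks]) from the most recent election entry."""
    leader = members = ranks = None
    for m in QUORUM_RE.finditer(raw_text):
        leader = m.group(1)
        members = m.group(2).split(",")
        ranks = [int(r) for r in m.group(3).split(",")]
    return leader, members, ranks

# On the entries above this returns np0005604209 as leader with four members
# (ranks 0-3); the later election in this capture grows the quorum to five.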
Feb 1 04:41:09 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e4 handle_auth_request failed to assign global_id Feb 1 04:41:10 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:10 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.464392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870464502, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 9165, "num_deletes": 254, "total_data_size": 10211253, "memory_usage": 10523112, "flush_reason": "Manual Compaction"} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870520622, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 9045802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 9170, "table_properties": {"data_size": 8995028, "index_size": 27564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 233345, "raw_average_key_size": 26, "raw_value_size": 8843278, "raw_average_value_size": 1000, "num_data_blocks": 1059, "num_entries": 8841, "num_filter_entries": 8841, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 1769938864, "file_creation_time": 1769938870, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 56292 microseconds, and 20668 cpu microseconds. 
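RocksDB's structured events in the entries above and below (recovery_started, table_file_creation, flush_started/flush_finished, compaction_*) follow the "EVENT_LOG_v1 {json}" marker. A minimal sketch, assuming that marker (iter_rocksdb_events is an illustrative name), that decodes each payload in place with JSONDecoder.raw_decode so nested objects such as table_properties are handled:

import json

MARKER = "EVENT_LOG_v1 "
_decoder = json.JSONDecoder()

def iter_rocksdb_events(raw_text):
    """Yield each EVENT_LOG_v1 payload as a dict (nested objects included)."""
    pos = raw_text.find(MARKER)
    while pos != -1:
        # raw_decode parses one JSON value starting at the given offset and
        # ignores whatever follows it; a payload truncated by the capture
        # would raise ValueError here.
        event, _ = _decoder.raw_decode(raw_text, pos + len(MARKER))
        yield event
        pos = raw_text.find(MARKER, pos + len(MARKER))

# e.g. [e["file_size"] for e in iter_rocksdb_events(raw)
#       if e.get("event") == "table_file_creation"]
# -> [1887, 9045802, 9042432] for this capture.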
Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.520690) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 9045802 bytes OK Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.520717) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.522711) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.522734) EVENT_LOG_v1 {"time_micros": 1769938870522728, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.522751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10147324, prev total WAL file size 10147324, number of live WAL files 2. Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.524478) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end) Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8833KB) 8(1887B)] Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870524578, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 9047689, "oldest_snapshot_seqno": -1} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 8591 keys, 9042432 bytes, temperature: kUnknown Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870574091, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 9042432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8992313, "index_size": 27554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 228827, "raw_average_key_size": 26, "raw_value_size": 8843808, "raw_average_value_size": 1029, "num_data_blocks": 1059, "num_entries": 8591, "num_filter_entries": 8591, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769938870, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.574395) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 9042432 bytes Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.576091) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.5 rd, 182.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.6, 0.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 8846, records dropped: 255 output_compression: NoCompression Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.576120) EVENT_LOG_v1 {"time_micros": 1769938870576107, "job": 4, "event": "compaction_finished", "compaction_time_micros": 49576, "compaction_time_cpu_micros": 27734, "output_level": 6, "num_output_files": 1, "total_output_size": 9042432, "num_input_records": 8846, "num_output_records": 8591, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870577431, "job": 4, "event": "table_file_deletion", "file_number": 14} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938870577483, "job": 4, "event": "table_file_deletion", "file_number": 8} Feb 1 04:41:10 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:41:10.524325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:41:10 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Feb 1 04:41:10 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f8f20 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 1 04:41:11 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:11 localhost ceph-mon[278949]: paxos.3).electionLogic(18) init, last seen epoch 18 Feb 1 04:41:11 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 collect_metadata vda: no unique 
device id for vda: fallback method has no model nor serial Feb 1 04:41:11 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:12 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Feb 1 04:41:12 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Feb 1 04:41:12 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Feb 1 04:41:14 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Feb 1 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:41:14 localhost podman[279008]: 2026-02-01 09:41:14.864621949 +0000 UTC m=+0.078820494 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:41:14 localhost podman[279008]: 2026-02-01 09:41:14.876635011 +0000 UTC m=+0.090833546 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:41:14 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
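Several entries in this capture carry multi-line daemon output with tabs and newlines rendered as the rsyslog octal control-character escapes #011 (tab) and #012 (newline): the MDS print_map, the table_factory options, and the RocksDB "DUMPING STATS" tables earlier. A minimal sketch (unescape_syslog is an illustrative name) that expands those escapes so the original layout is readable again:

def unescape_syslog(message):
    """Expand rsyslog's #011 (tab) and #012 (newline) escapes in a log entry."""
    return message.replace("#011", "\t").replace("#012", "\n")

# e.g. printing the unescaped "DUMPING STATS" entry restores the per-level
# compaction table with one row per line instead of a single run-on string.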
Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604209 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604209 is new leader, mons np0005604209,np0005604211,np0005604210,np0005604215,np0005604213 in quorum (ranks 0,1,2,3,4) Feb 1 04:41:16 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:16 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e5 handle_auth_request failed to assign global_id Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Feb 1 04:41:16 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f9600 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 1 04:41:16 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:16 localhost ceph-mon[278949]: paxos.3).electionLogic(22) init, last seen epoch 22 Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:16 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:17 localhost podman[279159]: 2026-02-01 09:41:17.741282825 +0000 UTC m=+0.092458938 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, maintainer=Guillaume Abrioux , release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main, 
distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z) Feb 1 04:41:17 localhost podman[279159]: 2026-02-01 09:41:17.849734436 +0000 UTC m=+0.200910589 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, release=1764794109, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True) Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:21 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:21 localhost ceph-mon[278949]: paxos.3).electionLogic(25) init, last seen epoch 25, mid-election, bumping Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:21 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: Health check failed: 2/6 mons 
down, quorum np0005604209,np0005604211,np0005604210,np0005604215 (MON_DOWN) Feb 1 04:41:22 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604209 calling monitor election Feb 1 04:41:22 localhost ceph-mon[278949]: mon.np0005604209 is new leader, mons np0005604209,np0005604211,np0005604210,np0005604215,np0005604213,np0005604212 in quorum (ranks 0,1,2,3,4,5) Feb 1 04:41:22 localhost ceph-mon[278949]: Health check cleared: MON_DOWN (was: 2/6 mons down, quorum np0005604209,np0005604211,np0005604210,np0005604215) Feb 1 04:41:22 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:41:22 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:22 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:22 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
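The election entries above follow the expected quorum math: with six monitors in the map, a strict majority of four is required, so the brief MON_DOWN state with quorum np0005604209,np0005604211,np0005604210,np0005604215 still leaves the cluster serving, and the health check clears once np0005604213 and np0005604212 rejoin and all six appear in quorum (ranks 0-5). A small worked check, illustrative only and assuming the standard Ceph majority rule for monitors:

    # Quorum arithmetic for the MON_DOWN / "Cluster is now healthy" sequence above.
    mons = ["np0005604209", "np0005604211", "np0005604210",
            "np0005604215", "np0005604213", "np0005604212"]
    in_quorum = ["np0005604209", "np0005604211", "np0005604210", "np0005604215"]

    needed = len(mons) // 2 + 1               # strict majority: 4 of 6
    print(needed, len(in_quorum) >= needed)   # 4 True -> quorum holds while the two mons rejoin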
Feb 1 04:41:23 localhost podman[279562]: 2026-02-01 09:41:23.109435293 +0000 UTC m=+0.069693346 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:41:23 localhost podman[279562]: 2026-02-01 09:41:23.12269548 +0000 UTC m=+0.082953493 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:41:23 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:41:23 localhost podman[279561]: 2026-02-01 09:41:23.158668357 +0000 UTC m=+0.118453966 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 1 04:41:23 localhost podman[279561]: 2026-02-01 09:41:23.182083972 +0000 UTC m=+0.141869641 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:41:23 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:24 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:25 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:25 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:25 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:25 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:25 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:26 localhost ceph-mon[278949]: Reconfiguring mon.np0005604209 (monmap changed)... 
Feb 1 04:41:26 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604209 on np0005604209.localdomain Feb 1 04:41:26 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:26 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:26 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604209.isqrps", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:27 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e6 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 1 04:41:27 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.103:0/3951772484' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 1 04:41:27 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604209.isqrps (monmap changed)... Feb 1 04:41:27 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604209.isqrps on np0005604209.localdomain Feb 1 04:41:27 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:27 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:27 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:27 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604209.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:28 localhost ceph-mon[278949]: Reconfiguring crash.np0005604209 (monmap changed)... Feb 1 04:41:28 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604209 on np0005604209.localdomain Feb 1 04:41:28 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:28 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:28 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:29 localhost ceph-mon[278949]: Reconfiguring crash.np0005604210 (monmap changed)... 
Feb 1 04:41:29 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604210 on np0005604210.localdomain Feb 1 04:41:29 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:29 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:29 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:30 localhost podman[236852]: time="2026-02-01T09:41:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:41:30 localhost podman[236852]: @ - - [01/Feb/2026:09:41:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:41:30 localhost podman[236852]: @ - - [01/Feb/2026:09:41:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17772 "" "Go-http-client/1.1" Feb 1 04:41:30 localhost ceph-mon[278949]: Reconfiguring mon.np0005604210 (monmap changed)... Feb 1 04:41:30 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604210 on np0005604210.localdomain Feb 1 04:41:30 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:30 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' Feb 1 04:41:30 localhost ceph-mon[278949]: from='mgr.14120 172.18.0.103:0/1843309307' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:30 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e81 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 1 04:41:30 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e81 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 1 04:41:30 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e82 e82: 6 total, 6 up, 6 in Feb 1 04:41:30 localhost systemd[1]: session-14.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-16.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-21.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-22.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-23.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-20.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-18.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-17.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-24.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-19.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-25.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-26.scope: Deactivated successfully. Feb 1 04:41:30 localhost systemd[1]: session-26.scope: Consumed 3min 16.410s CPU time. Feb 1 04:41:30 localhost systemd-logind[761]: Session 14 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 16 logged out. Waiting for processes to exit. 
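The audit entries above record each monitor command as a JSON body, for example the auth get-or-create for the client.crash.* entities with mon/mgr "profile crash" caps. As a sketch only, assuming a reachable cluster and a readable client.admin keyring, the same body could be submitted from Python through the python-rados bindings; the entity and caps below are copied from the log, everything else is illustrative:

    import json
    import rados  # python-rados bindings (assumed installed)

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # same conf cephadm keeps distributing above
    cluster.connect()

    # Identical command body to the audited "auth get-or-create" entry above.
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.crash.np0005604210.localdomain",
        "caps": ["mon", "profile crash", "mgr", "profile crash"],
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode(), outs)
    cluster.shutdown()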
Feb 1 04:41:30 localhost systemd-logind[761]: Session 26 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 19 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 17 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 18 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 23 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 20 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 22 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 21 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 24 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Session 25 logged out. Waiting for processes to exit. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 14. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 16. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 21. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 22. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 23. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 20. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 18. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 17. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 24. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 19. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 25. Feb 1 04:41:30 localhost systemd-logind[761]: Removed session 26. Feb 1 04:41:30 localhost sshd[279983]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:41:30 localhost systemd-logind[761]: New session 64 of user ceph-admin. Feb 1 04:41:30 localhost systemd[1]: Started Session 64 of User ceph-admin. Feb 1 04:41:31 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604210.rirrtk (monmap changed)... Feb 1 04:41:31 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604210.rirrtk on np0005604210.localdomain Feb 1 04:41:31 localhost ceph-mon[278949]: from='client.? 172.18.0.103:0/3887042624' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:41:31 localhost ceph-mon[278949]: Activating manager daemon np0005604211.cuflqz Feb 1 04:41:31 localhost ceph-mon[278949]: from='client.? 
172.18.0.103:0/3887042624' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:41:31 localhost ceph-mon[278949]: Manager daemon np0005604211.cuflqz is now available Feb 1 04:41:31 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/mirror_snapshot_schedule"} : dispatch Feb 1 04:41:31 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/mirror_snapshot_schedule"} : dispatch Feb 1 04:41:31 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/trash_purge_schedule"} : dispatch Feb 1 04:41:31 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/trash_purge_schedule"} : dispatch Feb 1 04:41:31 localhost openstack_network_exporter[239388]: ERROR 09:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:41:31 localhost openstack_network_exporter[239388]: Feb 1 04:41:31 localhost openstack_network_exporter[239388]: ERROR 09:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:41:31 localhost openstack_network_exporter[239388]: Feb 1 04:41:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:41:31 localhost systemd[1]: tmp-crun.xUrXqz.mount: Deactivated successfully. Feb 1 04:41:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
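The "mgr fail" dispatched by client.admin above forces a manager failover, which is why np0005604211.cuflqz is activated immediately afterwards and starts clearing its rbd_support mirror_snapshot_schedule and trash_purge_schedule keys. For illustration, and assuming the ceph CLI and /etc/ceph/ceph.conf are present on the host, the new active manager can be confirmed by shelling out to ceph the same way other services in this log do:

    import json
    import subprocess

    # "ceph mgr stat" reports the active manager as JSON; no output keys are assumed here.
    out = subprocess.run(
        ["ceph", "mgr", "stat", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))  # expect np0005604211.cuflqz to be active, per the log above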
Feb 1 04:41:31 localhost podman[280068]: 2026-02-01 09:41:31.895596108 +0000 UTC m=+0.094253867 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.7, vcs-type=git, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., distribution-scope=public) Feb 1 04:41:31 localhost podman[280068]: 2026-02-01 09:41:31.912659853 +0000 UTC m=+0.111317602 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, maintainer=Red Hat, Inc.) Feb 1 04:41:31 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:41:32 localhost podman[280094]: 2026-02-01 09:41:32.037407946 +0000 UTC m=+0.132383833 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Feb 1 04:41:32 localhost podman[280094]: 2026-02-01 09:41:32.043109925 +0000 UTC m=+0.138085852 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:41:32 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:41:32 localhost podman[280131]: 2026-02-01 09:41:32.159274867 +0000 UTC m=+0.089049573 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, RELEASE=main) Feb 1 04:41:32 localhost podman[280131]: 2026-02-01 09:41:32.28910823 +0000 UTC m=+0.218882966 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:41:32 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:32 localhost ceph-mon[278949]: 
from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: [01/Feb/2026:09:41:32] ENGINE Bus STARTING Feb 1 04:41:33 localhost ceph-mon[278949]: [01/Feb/2026:09:41:32] ENGINE Serving on https://172.18.0.105:7150 Feb 1 04:41:33 localhost ceph-mon[278949]: [01/Feb/2026:09:41:32] ENGINE Client ('172.18.0.105', 36928) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:41:33 localhost ceph-mon[278949]: [01/Feb/2026:09:41:32] ENGINE Serving on http://172.18.0.105:8765 Feb 1 04:41:33 localhost ceph-mon[278949]: [01/Feb/2026:09:41:32] ENGINE Bus STARTED Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:34 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e82 _set_new_cache_sizes cache_size:1019840633 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 
localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604209", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604209", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:41:35 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:41:35 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value 
'877246668' is below minimum 939524096 Feb 1 04:41:35 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:41:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:35 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:36 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:41:37 localhost nova_compute[274317]: 2026-02-01 09:41:37.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:37 localhost nova_compute[274317]: 2026-02-01 09:41:37.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:41:37 localhost nova_compute[274317]: 2026-02-01 09:41:37.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:41:37 localhost nova_compute[274317]: 2026-02-01 09:41:37.309 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604209.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:37 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:38 localhost nova_compute[274317]: 2026-02-01 09:41:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:38 localhost nova_compute[274317]: 2026-02-01 09:41:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:38 localhost nova_compute[274317]: 2026-02-01 09:41:38.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:38 localhost nova_compute[274317]: 2026-02-01 09:41:38.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:41:38 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:38 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:41:38 localhost podman[281031]: 2026-02-01 09:41:38.866715306 +0000 UTC m=+0.079432943 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:41:38 localhost podman[281031]: 2026-02-01 09:41:38.904926263 +0000 UTC m=+0.117643830 container exec_died 
3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127) Feb 1 04:41:38 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 04:41:39 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e82 _set_new_cache_sizes cache_size:1020050934 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.234 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.234 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.235 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.235 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.235 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:41:39 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604210.rirrtk (monmap changed)... Feb 1 04:41:39 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604210.rirrtk on np0005604210.localdomain Feb 1 04:41:39 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:39 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:39 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:39 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:41:39 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3704204462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.690 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.906 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.908 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12432MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.909 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:41:39 localhost nova_compute[274317]: 2026-02-01 09:41:39.909 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.000 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.001 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.028 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.474 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.480 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.504 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.506 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:41:40 localhost nova_compute[274317]: 2026-02-01 09:41:40.507 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m 
Feb 1 04:41:40 localhost ceph-mon[278949]: Reconfiguring mon.np0005604211 (monmap changed)... Feb 1 04:41:40 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604211 on np0005604211.localdomain Feb 1 04:41:40 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:40 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:40 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:40 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:41 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... Feb 1 04:41:41 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:41:41.758 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:41:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:41:41.758 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:41:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:41:41.758 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" 
:: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:41:42 localhost nova_compute[274317]: 2026-02-01 09:41:42.502 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:42 localhost nova_compute[274317]: 2026-02-01 09:41:42.502 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:42 localhost nova_compute[274317]: 2026-02-01 09:41:42.522 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:42 localhost nova_compute[274317]: 2026-02-01 09:41:42.523 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:41:42 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:41:42 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:41:42 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:41:42 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:41:42 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:42 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:42 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:41:43 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:41:43 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:41:43 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:43 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:43 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:41:44 localhost ceph-mon[278949]: mon.np0005604215@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054664 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:41:44 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:41:44 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:41:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:44 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... 
Feb 1 04:41:44 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:41:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:41:44 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:41:45 localhost podman[281094]: 2026-02-01 09:41:45.857541409 +0000 UTC m=+0.071469092 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:41:45 localhost podman[281094]: 2026-02-01 09:41:45.866450848 +0000 UTC m=+0.080378532 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:41:45 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:41:46 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:41:46 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1423703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:41:46 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f9080 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Feb 1 04:41:46 localhost ceph-mon[278949]: mon.np0005604215@3(peon) e7 my rank is now 2 (was 3) Feb 1 04:41:46 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 1 04:41:46 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Feb 1 04:41:46 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f9600 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Feb 1 04:41:46 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:46 localhost ceph-mon[278949]: paxos.2).electionLogic(28) init, last seen epoch 28 Feb 1 04:41:46 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:46 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:49 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 handle_auth_request failed to assign global_id Feb 1 04:41:49 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 handle_auth_request failed to assign global_id Feb 1 04:41:50 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 handle_auth_request failed to assign global_id Feb 1 04:41:50 localhost ceph-mds[276952]: mds.beacon.mds.np0005604215.rwvxvg missed beacon ack from the monitors Feb 1 04:41:50 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 handle_auth_request failed to assign global_id Feb 1 04:41:51 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: paxos.2).electionLogic(31) init, last seen epoch 31, mid-election, bumping Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:41:51 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:41:51 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:51 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:41:51 localhost ceph-mon[278949]: Remove daemons mon.np0005604209 Feb 1 04:41:51 localhost ceph-mon[278949]: Reconfiguring mon.np0005604212 (monmap changed)... 
Feb 1 04:41:51 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:51 localhost ceph-mon[278949]: Safe to remove mon.np0005604209: new quorum should be ['np0005604211', 'np0005604210', 'np0005604215', 'np0005604213', 'np0005604212'] (from ['np0005604211', 'np0005604210', 'np0005604215', 'np0005604213', 'np0005604212']) Feb 1 04:41:51 localhost ceph-mon[278949]: Removing monitor np0005604209 from monmap... Feb 1 04:41:51 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "mon rm", "name": "np0005604209"} : dispatch Feb 1 04:41:51 localhost ceph-mon[278949]: Removing daemon mon.np0005604209 from np0005604209.localdomain -- ports [] Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604210,np0005604215,np0005604213 in quorum (ranks 0,1,2,3) Feb 1 04:41:51 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:41:51 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604210,np0005604215,np0005604213,np0005604212 in quorum (ranks 0,1,2,3,4) Feb 1 04:41:51 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:41:52 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:41:52 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:52 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:52 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:52 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:52 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:53 localhost ceph-mon[278949]: Removed label mon from host np0005604209.localdomain Feb 1 04:41:53 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)... 
Feb 1 04:41:53 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:41:53 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:53 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:53 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:41:53 localhost podman[281118]: 2026-02-01 09:41:53.873380571 +0000 UTC m=+0.088014781 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:41:53 localhost systemd[1]: tmp-crun.2ZAZDq.mount: Deactivated successfully. 
Feb 1 04:41:53 localhost podman[281118]: 2026-02-01 09:41:53.93232183 +0000 UTC m=+0.146956060 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:41:53 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:41:53 localhost podman[281119]: 2026-02-01 09:41:53.934754156 +0000 UTC m=+0.144414850 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:41:54 localhost podman[281119]: 2026-02-01 09:41:54.015015573 +0000 UTC m=+0.224676287 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:41:54 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:41:54 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:41:54 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:41:54 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:41:54 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:54 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:54 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:54 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:41:55 localhost ceph-mon[278949]: Removed label mgr from host np0005604209.localdomain Feb 1 04:41:55 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:41:55 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:41:55 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:55 localhost ceph-mon[278949]: Removed label _admin from host np0005604209.localdomain Feb 1 04:41:55 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:55 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:55 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... 
Feb 1 04:41:55 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:41:55 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:41:55 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:57 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:41:57 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:57 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:41:58 localhost ceph-mon[278949]: Reconfiguring mon.np0005604213 (monmap changed)... 
Feb 1 04:41:58 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:41:58 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:58 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:58 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:58 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:41:58 localhost podman[281219]: Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:58.999957819 +0000 UTC m=+0.077076959 container create b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1764794109, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:41:59 localhost systemd[1]: Started libpod-conmon-b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5.scope. Feb 1 04:41:59 localhost systemd[1]: Started libcrun container. 
Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:58.968161921 +0000 UTC m=+0.045281091 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:59.071353537 +0000 UTC m=+0.148472677 container init b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, release=1764794109, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, build-date=2025-12-08T17:28:53Z, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) Feb 1 04:41:59 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:59.082122605 +0000 UTC m=+0.159241745 container start b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.buildah.version=1.41.4, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, release=1764794109) Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:59.082372873 +0000 UTC m=+0.159492083 container attach b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container) Feb 1 04:41:59 localhost distracted_banach[281234]: 167 167 Feb 1 04:41:59 localhost systemd[1]: libpod-b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5.scope: Deactivated successfully. Feb 1 04:41:59 localhost podman[281219]: 2026-02-01 09:41:59.088221386 +0000 UTC m=+0.165340556 container died b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, name=rhceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, vcs-type=git, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:41:59 localhost podman[281239]: 2026-02-01 09:41:59.183869846 +0000 UTC m=+0.082993133 container remove b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_banach, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, name=rhceph, distribution-scope=public, release=1764794109, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph 
Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:41:59 localhost systemd[1]: libpod-conmon-b3b9e8e5daa73087f0d73108a5e6374b4903cd8b5e28492b78f720f845b44ce5.scope: Deactivated successfully. Feb 1 04:41:59 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:41:59 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:41:59 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:59 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:41:59 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:41:59 localhost podman[281309]: Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.923268376 +0000 UTC m=+0.073503936 container create 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Feb 1 04:41:59 localhost systemd[1]: Started libpod-conmon-2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508.scope. Feb 1 04:41:59 localhost systemd[1]: Started libcrun container. 
Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.982676399 +0000 UTC m=+0.132911949 container init 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.991592818 +0000 UTC m=+0.141828398 container start 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z) Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.991854587 +0000 UTC m=+0.142090187 container attach 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.41.4, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, release=1764794109) Feb 1 04:41:59 localhost sleepy_grothendieck[281324]: 167 167 Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.894155993 +0000 UTC m=+0.044391573 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:41:59 localhost systemd[1]: libpod-2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508.scope: Deactivated successfully. Feb 1 04:41:59 localhost podman[281309]: 2026-02-01 09:41:59.995230583 +0000 UTC m=+0.145466163 container died 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, build-date=2025-12-08T17:28:53Z, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container) Feb 1 04:42:00 localhost systemd[1]: var-lib-containers-storage-overlay-46a77e1510c16e43830f36acb27cc3f318464f3519063eb1bd7052dd15448178-merged.mount: Deactivated successfully. Feb 1 04:42:00 localhost podman[236852]: time="2026-02-01T09:42:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:42:00 localhost podman[236852]: @ - - [01/Feb/2026:09:42:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155373 "" "Go-http-client/1.1" Feb 1 04:42:00 localhost podman[236852]: @ - - [01/Feb/2026:09:42:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18100 "" "Go-http-client/1.1" Feb 1 04:42:00 localhost systemd[1]: var-lib-containers-storage-overlay-6674a12f290f9201b921bd86a590aff441f2e681786a5a1d3a45e45e7982e135-merged.mount: Deactivated successfully. 
Feb 1 04:42:00 localhost podman[281329]: 2026-02-01 09:42:00.152120724 +0000 UTC m=+0.144228075 container remove 2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_grothendieck, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Feb 1 04:42:00 localhost systemd[1]: libpod-conmon-2d8b18ccf3d06487dc33a3343896bb27f193e5e719609cccb6ad1e9537ab8508.scope: Deactivated successfully. Feb 1 04:42:00 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:42:00 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:42:00 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:00 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:00 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:42:00 localhost podman[281404]: Feb 1 04:42:00 localhost podman[281404]: 2026-02-01 09:42:00.9676216 +0000 UTC m=+0.068845270 container create 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:42:01 localhost 
systemd[1]: Started libpod-conmon-71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3.scope. Feb 1 04:42:01 localhost systemd[1]: Started libcrun container. Feb 1 04:42:01 localhost podman[281404]: 2026-02-01 09:42:01.032535016 +0000 UTC m=+0.133758676 container init 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1764794109, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public) Feb 1 04:42:01 localhost podman[281404]: 2026-02-01 09:42:00.937028201 +0000 UTC m=+0.038251941 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:01 localhost podman[281404]: 2026-02-01 09:42:01.043119128 +0000 UTC m=+0.144342788 container start 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1764794109, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7) Feb 1 04:42:01 localhost podman[281404]: 2026-02-01 09:42:01.043374196 +0000 UTC m=+0.144597856 container attach 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, architecture=x86_64) Feb 1 04:42:01 localhost unruffled_edison[281419]: 167 167 Feb 1 04:42:01 localhost systemd[1]: libpod-71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3.scope: Deactivated successfully. Feb 1 04:42:01 localhost podman[281404]: 2026-02-01 09:42:01.047191266 +0000 UTC m=+0.148414956 container died 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:42:01 localhost podman[281424]: 2026-02-01 09:42:01.143049753 +0000 UTC m=+0.084103290 container remove 71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_edison, architecture=x86_64, io.buildah.version=1.41.4, release=1764794109, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.openshift.expose-services=, version=7, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph) Feb 1 04:42:01 localhost systemd[1]: libpod-conmon-71e58449e65df3eb97ebfaf25bf91b2176b9a2c7800d81b31e7b41ef229e86c3.scope: Deactivated successfully. Feb 1 04:42:01 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... Feb 1 04:42:01 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:42:01 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:01 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:01 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:01 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:01 localhost openstack_network_exporter[239388]: ERROR 09:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:42:01 localhost openstack_network_exporter[239388]: Feb 1 04:42:01 localhost openstack_network_exporter[239388]: ERROR 09:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:42:01 localhost openstack_network_exporter[239388]: Feb 1 04:42:01 localhost podman[281503]: Feb 1 04:42:01 localhost podman[281503]: 2026-02-01 09:42:01.993614279 +0000 UTC m=+0.076650495 container create 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, io.buildah.version=1.41.4, release=1764794109, ceph=True, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, io.openshift.expose-services=) Feb 1 04:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-0ebf2d8067f95cd436c99c0f920b1b3492f01548588d943565faf7dc56cd8af5-merged.mount: Deactivated 
successfully. Feb 1 04:42:02 localhost systemd[1]: Started libpod-conmon-108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434.scope. Feb 1 04:42:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:42:02 localhost systemd[1]: Started libcrun container. Feb 1 04:42:02 localhost podman[281503]: 2026-02-01 09:42:01.95986022 +0000 UTC m=+0.042896506 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:02 localhost podman[281503]: 2026-02-01 09:42:02.061330072 +0000 UTC m=+0.144366288 container init 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:42:02 localhost podman[281503]: 2026-02-01 09:42:02.071328286 +0000 UTC m=+0.154364502 container start 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:42:02 localhost podman[281503]: 2026-02-01 09:42:02.071516172 +0000 UTC m=+0.154552388 container attach 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, vendor=Red 
Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-12-08T17:28:53Z, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.openshift.expose-services=, ceph=True) Feb 1 04:42:02 localhost hopeful_dirac[281518]: 167 167 Feb 1 04:42:02 localhost podman[281503]: 2026-02-01 09:42:02.076713845 +0000 UTC m=+0.159750091 container died 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7) Feb 1 04:42:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:42:02 localhost systemd[1]: libpod-108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434.scope: Deactivated successfully. 
Feb 1 04:42:02 localhost podman[281519]: 2026-02-01 09:42:02.126006751 +0000 UTC m=+0.081108385 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9/ubi-minimal, version=9.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9) Feb 1 04:42:02 localhost podman[281533]: 2026-02-01 09:42:02.181555143 +0000 UTC m=+0.091823621 container remove 108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_dirac, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:42:02 localhost systemd[1]: libpod-conmon-108785bf21bd35833b987e91acec7233c8291f086b547ed513d99118a6607434.scope: Deactivated successfully. Feb 1 04:42:02 localhost podman[281519]: 2026-02-01 09:42:02.22132324 +0000 UTC m=+0.176424944 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, release=1769056855, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 04:42:02 localhost podman[281535]: 2026-02-01 09:42:02.23373935 +0000 UTC m=+0.138246886 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:42:02 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:42:02 localhost podman[281535]: 2026-02-01 09:42:02.244659653 +0000 UTC m=+0.149167239 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:42:02 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:42:02 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... 
Feb 1 04:42:02 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:42:02 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:02 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:02 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:02 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:02 localhost podman[281628]: Feb 1 04:42:02 localhost podman[281628]: 2026-02-01 09:42:02.934615242 +0000 UTC m=+0.076632674 container create 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Feb 1 04:42:02 localhost systemd[1]: Started libpod-conmon-841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71.scope. Feb 1 04:42:02 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:02 localhost podman[281628]: 2026-02-01 09:42:02.992842548 +0000 UTC m=+0.134859980 container init 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:42:03 localhost interesting_hellman[281643]: 167 167 Feb 1 04:42:03 localhost podman[281628]: 2026-02-01 09:42:03.001686865 +0000 UTC m=+0.143704267 container start 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1764794109, distribution-scope=public, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:42:03 localhost systemd[1]: libpod-841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71.scope: Deactivated successfully. 
Feb 1 04:42:03 localhost podman[281628]: 2026-02-01 09:42:03.002317475 +0000 UTC m=+0.144334877 container attach 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:42:03 localhost podman[281628]: 2026-02-01 09:42:02.903820636 +0000 UTC m=+0.045838118 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:03 localhost podman[281628]: 2026-02-01 09:42:03.004598737 +0000 UTC m=+0.146616149 container died 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Feb 1 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-c1f216f62cdb19cadf090a54847194e48cb113d5fef9fbfb597031b2e739ef86-merged.mount: Deactivated successfully. Feb 1 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-15da0d3557ccd159f916e75820219bc59467909e0f5842b3e4ca825e652ea28b-merged.mount: Deactivated successfully. 
Feb 1 04:42:03 localhost podman[281648]: 2026-02-01 09:42:03.092565205 +0000 UTC m=+0.081119195 container remove 841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_hellman, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-12-08T17:28:53Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, vcs-type=git) Feb 1 04:42:03 localhost systemd[1]: libpod-conmon-841ab84f74b3b0de7505308422686f614e6815e2289162b730cc0dab5c3a1e71.scope: Deactivated successfully. Feb 1 04:42:03 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... Feb 1 04:42:03 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:42:03 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:03 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:03 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:42:03 localhost podman[281717]: Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.793680395 +0000 UTC m=+0.080256388 container create 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph 
Storage 7) Feb 1 04:42:03 localhost systemd[1]: Started libpod-conmon-9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401.scope. Feb 1 04:42:03 localhost systemd[1]: Started libcrun container. Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.854800582 +0000 UTC m=+0.141376545 container init 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, release=1764794109, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.762385244 +0000 UTC m=+0.048961247 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.863789254 +0000 UTC m=+0.150365237 container start 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, RELEASE=main, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109) Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.864049982 +0000 UTC m=+0.150625945 container attach 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main) Feb 1 04:42:03 localhost boring_mendeleev[281732]: 167 167 Feb 1 04:42:03 localhost systemd[1]: libpod-9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401.scope: Deactivated successfully. Feb 1 04:42:03 localhost podman[281717]: 2026-02-01 09:42:03.870633708 +0000 UTC m=+0.157209711 container died 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:42:03 localhost podman[281737]: 2026-02-01 09:42:03.963027686 +0000 UTC m=+0.084944235 container remove 9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_mendeleev, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red 
Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, name=rhceph, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:42:03 localhost systemd[1]: libpod-conmon-9cec727ba05020d1431722bb5825354416bbd11e3f3bbdbaee93d7477c23e401.scope: Deactivated successfully. Feb 1 04:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-fd7138d88f859902d472c0727c0dcec9e804da4e7fdf15a3be4a2442575f3842-merged.mount: Deactivated successfully. Feb 1 04:42:04 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:04 localhost ceph-mon[278949]: Reconfiguring mon.np0005604215 (monmap changed)... Feb 1 04:42:04 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:42:04 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:04 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:04 localhost podman[281830]: Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.806423168 +0000 UTC m=+0.072188966 container create 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1764794109, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:42:04 localhost systemd[1]: Started libpod-conmon-8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9.scope. Feb 1 04:42:04 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.867643238 +0000 UTC m=+0.133409036 container init 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, name=rhceph, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.776626934 +0000 UTC m=+0.042392792 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.880973006 +0000 UTC m=+0.146738804 container start 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main, release=1764794109, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7) Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.881491772 +0000 UTC m=+0.147257580 container attach 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:42:04 localhost intelligent_hertz[281845]: 167 167 Feb 1 04:42:04 localhost systemd[1]: libpod-8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9.scope: Deactivated successfully. Feb 1 04:42:04 localhost podman[281830]: 2026-02-01 09:42:04.885346433 +0000 UTC m=+0.151112231 container died 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, name=rhceph, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 1 04:42:04 localhost podman[281850]: 2026-02-01 09:42:04.973689384 +0000 UTC m=+0.079317738 container remove 8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_hertz, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, name=rhceph, version=7, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7) Feb 1 04:42:04 localhost systemd[1]: libpod-conmon-8220110df24105aca26072148fe992cfe6b42290b9d9909bc4da026476d0bcb9.scope: Deactivated successfully. Feb 1 04:42:05 localhost systemd[1]: tmp-crun.ysYJMn.mount: Deactivated successfully. Feb 1 04:42:05 localhost systemd[1]: var-lib-containers-storage-overlay-060c0c0947e7821dcdb0bccdb08858632da93356492bf9811bae79228f2e5d5c-merged.mount: Deactivated successfully. Feb 1 04:42:05 localhost podman[281872]: Feb 1 04:42:05 localhost podman[281872]: 2026-02-01 09:42:05.19197038 +0000 UTC m=+0.077840132 container create e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1764794109, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:42:05 localhost systemd[1]: Started libpod-conmon-e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7.scope. Feb 1 04:42:05 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb9f9684541b0544fdc9533088585cb3613da05aa104a70a2f14ab297a183cc/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 1 04:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb9f9684541b0544fdc9533088585cb3613da05aa104a70a2f14ab297a183cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb9f9684541b0544fdc9533088585cb3613da05aa104a70a2f14ab297a183cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 04:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb9f9684541b0544fdc9533088585cb3613da05aa104a70a2f14ab297a183cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:42:05 localhost podman[281872]: 2026-02-01 09:42:05.254920084 +0000 UTC m=+0.140789816 container init e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_CLEAN=True, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 1 04:42:05 localhost podman[281872]: 2026-02-01 09:42:05.159728858 +0000 UTC m=+0.045598670 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:05 localhost podman[281872]: 2026-02-01 09:42:05.271863915 +0000 UTC m=+0.157733677 container start e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, release=1764794109, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Feb 1 04:42:05 localhost podman[281872]: 2026-02-01 09:42:05.272222436 +0000 UTC m=+0.158092188 container attach e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, release=1764794109, io.buildah.version=1.41.4, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7) Feb 1 04:42:05 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:05 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:06 localhost systemd[1]: tmp-crun.eEe5vm.mount: Deactivated successfully. 
Feb 1 04:42:06 localhost great_carver[281887]: [
Feb 1 04:42:06 localhost great_carver[281887]: {
Feb 1 04:42:06 localhost great_carver[281887]: "available": false,
Feb 1 04:42:06 localhost great_carver[281887]: "ceph_device": false,
Feb 1 04:42:06 localhost great_carver[281887]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 1 04:42:06 localhost great_carver[281887]: "lsm_data": {},
Feb 1 04:42:06 localhost great_carver[281887]: "lvs": [],
Feb 1 04:42:06 localhost great_carver[281887]: "path": "/dev/sr0",
Feb 1 04:42:06 localhost great_carver[281887]: "rejected_reasons": [
Feb 1 04:42:06 localhost great_carver[281887]: "Insufficient space (<5GB)",
Feb 1 04:42:06 localhost great_carver[281887]: "Has a FileSystem"
Feb 1 04:42:06 localhost great_carver[281887]: ],
Feb 1 04:42:06 localhost great_carver[281887]: "sys_api": {
Feb 1 04:42:06 localhost great_carver[281887]: "actuators": null,
Feb 1 04:42:06 localhost great_carver[281887]: "device_nodes": "sr0",
Feb 1 04:42:06 localhost great_carver[281887]: "human_readable_size": "482.00 KB",
Feb 1 04:42:06 localhost great_carver[281887]: "id_bus": "ata",
Feb 1 04:42:06 localhost great_carver[281887]: "model": "QEMU DVD-ROM",
Feb 1 04:42:06 localhost great_carver[281887]: "nr_requests": "2",
Feb 1 04:42:06 localhost great_carver[281887]: "partitions": {},
Feb 1 04:42:06 localhost great_carver[281887]: "path": "/dev/sr0",
Feb 1 04:42:06 localhost great_carver[281887]: "removable": "1",
Feb 1 04:42:06 localhost great_carver[281887]: "rev": "2.5+",
Feb 1 04:42:06 localhost great_carver[281887]: "ro": "0",
Feb 1 04:42:06 localhost great_carver[281887]: "rotational": "1",
Feb 1 04:42:06 localhost great_carver[281887]: "sas_address": "",
Feb 1 04:42:06 localhost great_carver[281887]: "sas_device_handle": "",
Feb 1 04:42:06 localhost great_carver[281887]: "scheduler_mode": "mq-deadline",
Feb 1 04:42:06 localhost great_carver[281887]: "sectors": 0,
Feb 1 04:42:06 localhost great_carver[281887]: "sectorsize": "2048",
Feb 1 04:42:06 localhost great_carver[281887]: "size": 493568.0,
Feb 1 04:42:06 localhost great_carver[281887]: "support_discard": "0",
Feb 1 04:42:06 localhost great_carver[281887]: "type": "disk",
Feb 1 04:42:06 localhost great_carver[281887]: "vendor": "QEMU"
Feb 1 04:42:06 localhost great_carver[281887]: }
Feb 1 04:42:06 localhost great_carver[281887]: }
Feb 1 04:42:06 localhost great_carver[281887]: ]
Feb 1 04:42:06 localhost systemd[1]: libpod-e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7.scope: Deactivated successfully.
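The JSON block above is the per-host device inventory gathered by the short-lived great_carver container: /dev/sr0 is rejected because it is a 482 KB removable DVD-ROM, far below the 5 GB minimum, and it already carries a filesystem. A minimal, illustrative Python sketch (not part of the log) for summarizing an inventory payload of this shape:

import json

# Condensed from the inventory entry logged above (great_carver container);
# only the fields needed for the summary are kept here.
INVENTORY = """
[
  {
    "available": false,
    "ceph_device": false,
    "device_id": "QEMU_DVD-ROM_QM00001",
    "path": "/dev/sr0",
    "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"],
    "sys_api": {"human_readable_size": "482.00 KB", "type": "disk", "vendor": "QEMU"}
  }
]
"""

def summarize(raw: str) -> None:
    """Print one line per device: path, availability, and rejection reasons if any."""
    for dev in json.loads(raw):
        if dev.get("available"):
            print(f"{dev['path']}: available for OSD deployment")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "unknown"
            print(f"{dev['path']}: rejected ({reasons})")

summarize(INVENTORY)

Against the entry above this prints "/dev/sr0: rejected (Insufficient space (<5GB), Has a FileSystem)".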
Feb 1 04:42:06 localhost podman[281872]: 2026-02-01 09:42:06.210153733 +0000 UTC m=+1.096023535 container died e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., release=1764794109, ceph=True, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public) Feb 1 04:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-6bb9f9684541b0544fdc9533088585cb3613da05aa104a70a2f14ab297a183cc-merged.mount: Deactivated successfully. Feb 1 04:42:06 localhost podman[283544]: 2026-02-01 09:42:06.290447602 +0000 UTC m=+0.072698621 container remove e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_carver, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, version=7, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git) Feb 1 04:42:06 localhost systemd[1]: libpod-conmon-e6325582f06e898ac32fb4a8cc3bab54315d9270b3cf4c91c489ff48572ef9e7.scope: Deactivated successfully. 
Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:07 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: Removing np0005604209.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Removing np0005604209.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:08 localhost ceph-mon[278949]: Added label _no_schedule to host np0005604209.localdomain Feb 1 04:42:08 localhost ceph-mon[278949]: Removing np0005604209.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:08 localhost ceph-mon[278949]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005604209.localdomain Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 
04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:08 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:09 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:09 localhost ceph-mon[278949]: Removing daemon crash.np0005604209 from np0005604209.localdomain -- ports [] Feb 1 04:42:09 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:09 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain"} : dispatch Feb 1 04:42:09 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain"} : dispatch Feb 1 04:42:09 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain"}]': finished Feb 1 04:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:42:09 localhost podman[283879]: 2026-02-01 09:42:09.867342305 +0000 UTC m=+0.079135921 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 1 04:42:09 localhost podman[283879]: 2026-02-01 09:42:09.876674628 +0000 UTC m=+0.088468244 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:42:09 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
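The podman[283879] events above are a systemd-triggered healthcheck of the ceilometer_agent_compute container: the transient unit runs the configured '/openstack/healthcheck compute' test, records health_status=healthy, and then deactivates. The same state can be spot-checked from podman inspect; a small illustrative Python sketch (container name taken from the log, everything else assumed):

import json
import subprocess

NAME = "ceilometer_agent_compute"  # container_name from the healthcheck events above

# podman inspect returns a JSON array; the health state mirrors the
# health_status=... field recorded in the journal (the key is "Health" on
# current podman releases, "Healthcheck" on some older ones).
raw = subprocess.run(["podman", "inspect", NAME],
                     check=True, capture_output=True, text=True).stdout
state = json.loads(raw)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(NAME, "->", health.get("Status", "no healthcheck configured"))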
Feb 1 04:42:10 localhost ceph-mon[278949]: Removed host np0005604209.localdomain Feb 1 04:42:10 localhost ceph-mon[278949]: Removing key for client.crash.np0005604209.localdomain Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth rm", "entity": "client.crash.np0005604209.localdomain"} : dispatch Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth rm", "entity": "client.crash.np0005604209.localdomain"} : dispatch Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005604209.localdomain"}]': finished Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:10 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:12 localhost ceph-mon[278949]: Reconfiguring crash.np0005604210 (monmap changed)... Feb 1 04:42:12 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604210 on np0005604210.localdomain Feb 1 04:42:12 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:12 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:12 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:12 localhost ceph-mon[278949]: Reconfiguring mon.np0005604210 (monmap changed)... Feb 1 04:42:12 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:42:12 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604210 on np0005604210.localdomain Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:13 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604210.rirrtk (monmap changed)... 
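The 04:42:08-04:42:10 ceph-mon entries trace cephadm draining host np0005604209.localdomain: its ceph.conf and admin keyring are removed, the _no_schedule label is applied, the crash daemon and its client key are deleted, the host itself is removed, and the daemons on the remaining hosts are then reconfigured against the new monmap. The log does not show the commands that drove this; the usual cephadm sequence looks roughly like the following sketch (command names assumed from standard 'ceph orch' usage):

import subprocess

HOST = "np0005604209.localdomain"  # host being drained in the entries above

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return stdout (assumes an admin keyring is reachable)."""
    return subprocess.run(["ceph", *args],
                          check=True, capture_output=True, text=True).stdout

ceph("orch", "host", "drain", HOST)              # applies the _no_schedule label seen above
leftover = ceph("orch", "ps", "--hostname", HOST)
print(leftover or f"no daemons left on {HOST}")  # wait until nothing is scheduled here
ceph("orch", "host", "rm", HOST)                 # corresponds to "Removed host ..." in the log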
Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:13 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604210.rirrtk on np0005604210.localdomain Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:13 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:42:14 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:14 localhost ceph-mon[278949]: Reconfiguring mon.np0005604211 (monmap changed)... Feb 1 04:42:14 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604211 on np0005604211.localdomain Feb 1 04:42:14 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:14 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:14 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:14 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:15 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... Feb 1 04:42:15 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:42:15 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:15 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:15 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:15 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:16 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... 
Feb 1 04:42:16 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:42:16 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:16 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:16 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:16 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:42:16 localhost podman[283934]: 2026-02-01 09:42:16.876028661 +0000 UTC m=+0.091085058 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:42:16 localhost podman[283934]: 2026-02-01 09:42:16.886718405 +0000 UTC m=+0.101774852 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:42:16 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:42:17 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... 
Feb 1 04:42:17 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:42:17 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:17 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:17 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:42:17 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:18 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:42:18 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:42:18 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:42:18 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:18 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:18 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:42:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:19 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:42:19 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:42:19 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:19 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:19 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:19 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:20 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f91e0 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Feb 1 04:42:20 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:42:20 localhost ceph-mon[278949]: paxos.2).electionLogic(34) init, last seen epoch 34 Feb 1 04:42:20 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:20 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:42:24 localhost podman[283958]: 2026-02-01 09:42:24.865845559 +0000 UTC m=+0.076240562 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:42:24 localhost podman[283958]: 2026-02-01 09:42:24.87672201 +0000 UTC m=+0.087116973 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:42:24 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:42:24 localhost podman[283957]: 2026-02-01 09:42:24.916990603 +0000 UTC m=+0.130247557 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:42:24 localhost podman[283957]: 2026-02-01 09:42:24.958008019 +0000 UTC m=+0.171264923 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible) Feb 1 04:42:24 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:42:25 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:25 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:25 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e8 handle_timecheck drop unexpected msg Feb 1 04:42:25 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:25 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:26 localhost ceph-mon[278949]: Remove daemons mon.np0005604212 Feb 1 04:42:26 localhost ceph-mon[278949]: Safe to remove mon.np0005604212: new quorum should be ['np0005604211', 'np0005604210', 'np0005604215', 'np0005604213'] (from ['np0005604211', 'np0005604210', 'np0005604215', 'np0005604213']) Feb 1 04:42:26 localhost ceph-mon[278949]: Removing monitor np0005604212 from monmap... Feb 1 04:42:26 localhost ceph-mon[278949]: Removing daemon mon.np0005604212 from np0005604212.localdomain -- ports [] Feb 1 04:42:26 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:42:26 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:26 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:42:26 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:42:26 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:42:26 localhost ceph-mon[278949]: Health check failed: 1/4 mons down, quorum np0005604211,np0005604215,np0005604213 (MON_DOWN) Feb 1 04:42:26 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:42:26 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:42:26 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:26 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604210,np0005604215,np0005604213 in quorum (ranks 0,1,2,3) Feb 1 04:42:26 localhost ceph-mon[278949]: from='mgr.14190 ' entity='' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:26 localhost ceph-mon[278949]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005604211,np0005604215,np0005604213) Feb 1 04:42:26 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:42:26 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:42:26 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:42:26 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:27 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)... 
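The 04:42:26 entries show mon.np0005604212 being dropped from the monmap: the four remaining monitors call an election, MON_DOWN is raised briefly, and it clears once np0005604211 becomes leader with np0005604211, np0005604210, np0005604215 and np0005604213 in quorum (ranks 0-3). The resulting quorum can be confirmed directly from the cluster; an illustrative sketch (not part of the log):

import json
import subprocess

# quorum_status reports the election epoch, the current leader, and the monitors in quorum.
out = subprocess.run(["ceph", "quorum_status", "--format", "json"],
                     check=True, capture_output=True, text=True).stdout
status = json.loads(out)
print("leader :", status["quorum_leader_name"])
print("quorum :", ", ".join(status["quorum_names"]))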
Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:27 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:27 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:42:28 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:42:28 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:42:28 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:28 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:28 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:42:29 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:29 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:42:29 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:42:29 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:29 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:29 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:29 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:30 localhost podman[236852]: time="2026-02-01T09:42:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:42:30 localhost podman[236852]: @ - - [01/Feb/2026:09:42:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:42:30 localhost podman[236852]: @ - - [01/Feb/2026:09:42:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17772 "" "Go-http-client/1.1" Feb 1 04:42:30 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... 
Feb 1 04:42:30 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:42:30 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:30 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:30 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:30 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:31 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:42:31 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:42:31 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:31 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:31 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:31 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:31 localhost openstack_network_exporter[239388]: ERROR 09:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:42:31 localhost openstack_network_exporter[239388]: Feb 1 04:42:31 localhost openstack_network_exporter[239388]: ERROR 09:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:42:31 localhost openstack_network_exporter[239388]: Feb 1 04:42:31 localhost podman[284057]: Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.723707975 +0000 UTC m=+0.062231033 container create 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:42:31 localhost systemd[1]: Started libpod-conmon-0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f.scope. Feb 1 04:42:31 localhost systemd[1]: Started libcrun container. Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.690143162 +0000 UTC m=+0.028666270 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.791900003 +0000 UTC m=+0.130423071 container init 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.802050042 +0000 UTC m=+0.140573100 container start 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=1764794109, build-date=2025-12-08T17:28:53Z, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, version=7) Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.802330071 +0000 UTC m=+0.140853199 container attach 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:42:31 localhost gallant_yonath[284073]: 167 167 Feb 1 04:42:31 localhost systemd[1]: libpod-0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f.scope: Deactivated successfully. Feb 1 04:42:31 localhost podman[284057]: 2026-02-01 09:42:31.806068658 +0000 UTC m=+0.144591756 container died 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.41.4, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:42:31 localhost podman[284078]: 2026-02-01 09:42:31.904263937 +0000 UTC m=+0.084359176 container remove 0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_yonath, distribution-scope=public, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, release=1764794109, maintainer=Guillaume Abrioux ) Feb 1 04:42:31 localhost systemd[1]: libpod-conmon-0a58650c56f3caae4f60b33951f2267b45e7c9415d56a601068e1406c5c1e43f.scope: Deactivated successfully. Feb 1 04:42:32 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:42:32 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:42:32 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:32 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:32 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:42:32 localhost podman[284145]: 2026-02-01 09:42:32.626070836 +0000 UTC m=+0.082966434 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, release=1769056855, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter) Feb 1 04:42:32 localhost podman[284159]: Feb 1 04:42:32 localhost podman[284145]: 2026-02-01 09:42:32.663001314 +0000 UTC m=+0.119896892 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, release=1769056855, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, managed_by=edpm_ansible) Feb 1 04:42:32 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:42:32 localhost podman[284146]: 2026-02-01 09:42:32.682929509 +0000 UTC m=+0.139566359 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.698957482 +0000 UTC m=+0.127173520 container create a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7) Feb 1 04:42:32 localhost podman[284146]: 2026-02-01 09:42:32.717666569 +0000 UTC m=+0.174303469 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent) Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.61921527 +0000 UTC m=+0.047431298 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:32 localhost systemd[1]: Started libpod-conmon-a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920.scope. Feb 1 04:42:32 localhost systemd[1]: var-lib-containers-storage-overlay-abff02312b73c76c268b85fdd873e95c4cf7adfb7bdee37794fbb9c2a7e7d65d-merged.mount: Deactivated successfully. Feb 1 04:42:32 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:42:32 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.756512706 +0000 UTC m=+0.184728734 container init a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2025-12-08T17:28:53Z, vcs-type=git, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.76648354 +0000 UTC m=+0.194699568 container start a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109) Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.766751618 +0000 UTC m=+0.194967686 container attach a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1764794109, build-date=2025-12-08T17:28:53Z, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:42:32 localhost infallible_chatterjee[284199]: 167 167 Feb 1 04:42:32 localhost systemd[1]: libpod-a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920.scope: Deactivated successfully. Feb 1 04:42:32 localhost podman[284159]: 2026-02-01 09:42:32.769986719 +0000 UTC m=+0.198202827 container died a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux , distribution-scope=public, CEPH_POINT_RELEASE=, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:42:32 localhost podman[284204]: 2026-02-01 09:42:32.868418656 +0000 UTC m=+0.085185462 container remove a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_chatterjee, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-type=git, name=rhceph, release=1764794109, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.description=Red 
Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Feb 1 04:42:32 localhost systemd[1]: libpod-conmon-a3b117273bf954634b68ef03a30e30725a875b7024a1a065b5d191b3ece21920.scope: Deactivated successfully. Feb 1 04:42:33 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:42:33 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:42:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:33 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:33 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:42:33 localhost podman[284281]: Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.647621106 +0000 UTC m=+0.078553326 container create 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, vcs-type=git, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True) Feb 1 04:42:33 localhost systemd[1]: Started libpod-conmon-6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda.scope. Feb 1 04:42:33 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.701844916 +0000 UTC m=+0.132777166 container init 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, name=rhceph, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, RELEASE=main, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.710837398 +0000 UTC m=+0.141769638 container start 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, vcs-type=git, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.711097486 +0000 UTC m=+0.142029736 container attach 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, maintainer=Guillaume Abrioux , release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, ceph=True, name=rhceph, vcs-type=git, version=7, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:42:33 localhost bold_aryabhata[284297]: 167 167 Feb 1 04:42:33 localhost systemd[1]: libpod-6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda.scope: Deactivated successfully. Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.714691518 +0000 UTC m=+0.145623788 container died 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, distribution-scope=public, name=rhceph, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:42:33 localhost podman[284281]: 2026-02-01 09:42:33.61624061 +0000 UTC m=+0.047172870 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:33 localhost systemd[1]: var-lib-containers-storage-overlay-03c0dbcb7a80c4978082c27b932267ebcdf02635aee3b5336b36a4179f48d69f-merged.mount: Deactivated successfully. Feb 1 04:42:33 localhost systemd[1]: var-lib-containers-storage-overlay-e645fb20f1e7d3ad5acc388d6d4eb7d292c55183c371648e6268ce78bc15f268-merged.mount: Deactivated successfully. 
Feb 1 04:42:33 localhost podman[284302]: 2026-02-01 09:42:33.817769461 +0000 UTC m=+0.089788557 container remove 6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_aryabhata, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1764794109, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:42:33 localhost systemd[1]: libpod-conmon-6b98d98b8ce5f088db49f8c890e7d9ba7dc927375d2a32d0f9ca9c2af8bfadda.scope: Deactivated successfully. Feb 1 04:42:34 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:34 localhost podman[284378]: Feb 1 04:42:34 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... 
Feb 1 04:42:34 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:42:34 localhost ceph-mon[278949]: Deploying daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:34 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.620602651 +0000 UTC m=+0.075513389 container create b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , release=1764794109) Feb 1 04:42:34 localhost systemd[1]: Started libpod-conmon-b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627.scope. Feb 1 04:42:34 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.589707302 +0000 UTC m=+0.044618090 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.694360234 +0000 UTC m=+0.149270972 container init b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, ceph=True, release=1764794109, vcs-type=git, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.703395217 +0000 UTC m=+0.158305945 container start b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, ceph=True, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.703618904 +0000 UTC m=+0.158529642 container attach b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.41.4, RELEASE=main, version=7, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7) Feb 1 04:42:34 localhost nifty_archimedes[284393]: 167 167 Feb 1 04:42:34 localhost systemd[1]: libpod-b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627.scope: Deactivated successfully. Feb 1 04:42:34 localhost podman[284378]: 2026-02-01 09:42:34.705770562 +0000 UTC m=+0.160681290 container died b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1764794109, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux ) Feb 1 04:42:34 localhost systemd[1]: var-lib-containers-storage-overlay-3ea4fa55cda2cc6bdb831e3ada3fdfb1b8276d0cbe4a3811218ce31280003806-merged.mount: Deactivated successfully. 
Feb 1 04:42:34 localhost podman[284398]: 2026-02-01 09:42:34.803109345 +0000 UTC m=+0.083254633 container remove b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_archimedes, architecture=x86_64, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main) Feb 1 04:42:34 localhost systemd[1]: libpod-conmon-b5fe18c4f0eac005dd389a509e7535e87844c167e874e32612960871781fd627.scope: Deactivated successfully. Feb 1 04:42:35 localhost podman[284467]: Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.528105433 +0000 UTC m=+0.069447619 container create 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, release=1764794109, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:42:35 localhost systemd[1]: Started libpod-conmon-4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0.scope. Feb 1 04:42:35 localhost systemd[1]: Started libcrun container. 
Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.585807182 +0000 UTC m=+0.127149358 container init 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, build-date=2025-12-08T17:28:53Z, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.) Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.593978649 +0000 UTC m=+0.135320845 container start 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, version=7, release=1764794109, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=) Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.594273558 +0000 UTC m=+0.135615784 container attach 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph 
Storage 7, ceph=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:42:35 localhost mystifying_poitras[284482]: 167 167 Feb 1 04:42:35 localhost systemd[1]: libpod-4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0.scope: Deactivated successfully. Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.597169859 +0000 UTC m=+0.138512075 container died 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:42:35 localhost podman[284467]: 2026-02-01 09:42:35.502846031 +0000 UTC m=+0.044188237 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:42:35 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... 
Feb 1 04:42:35 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:42:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:35 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:35 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:35 localhost podman[284487]: 2026-02-01 09:42:35.69414036 +0000 UTC m=+0.084167000 container remove 4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_poitras, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., version=7, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1764794109, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=) Feb 1 04:42:35 localhost systemd[1]: libpod-conmon-4f84bd405b091abd186f45113162ec731181873dd9ff1d6cbe8796fca860e4f0.scope: Deactivated successfully. Feb 1 04:42:35 localhost systemd[1]: var-lib-containers-storage-overlay-dde2bd4f4d3dc4fc457a25ebfe4f99249ce6b76a09e50c621d51165220efe7eb-merged.mount: Deactivated successfully. 
Feb 1 04:42:36 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 1 04:42:36 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 1 04:42:36 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Feb 1 04:42:36 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f9600 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Feb 1 04:42:36 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:42:36 localhost ceph-mon[278949]: paxos.2).electionLogic(40) init, last seen epoch 40 Feb 1 04:42:36 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:36 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:37 localhost nova_compute[274317]: 2026-02-01 09:42:37.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:37 localhost nova_compute[274317]: 2026-02-01 09:42:37.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:42:37 localhost nova_compute[274317]: 2026-02-01 09:42:37.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:42:37 localhost nova_compute[274317]: 2026-02-01 09:42:37.118 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:42:38 localhost nova_compute[274317]: 2026-02-01 09:42:38.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:39 localhost nova_compute[274317]: 2026-02-01 09:42:39.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:39 localhost nova_compute[274317]: 2026-02-01 09:42:39.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:39 localhost nova_compute[274317]: 2026-02-01 09:42:39.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:42:40 localhost nova_compute[274317]: 2026-02-01 09:42:40.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
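The nova_compute DEBUG entries above ("Running periodic task ComputeManager._heal_instance_info_cache", "_poll_rescued_instances", "_reclaim_queued_deletes, skipping...") are oslo.service periodic tasks firing on their timers. Below is a minimal sketch of that decorator pattern under stated assumptions: DemoManager, the task bodies, and the local re-registration of reclaim_instance_interval are illustrative stand-ins, not nova's actual code.

# Minimal sketch of the oslo.service periodic-task pattern behind the
# "Running periodic task ComputeManager._*" entries above (illustrative only).
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
# Registered here only so the demo runs standalone; nova defines the real option.
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class DemoManager(periodic_task.PeriodicTasks):
    """Stand-in for a manager such as nova's ComputeManager."""

    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _heal_instance_info_cache(self, context):
        # Mirrors "Didn't find any instances for network info cache update."
        print("nothing to heal")

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _reclaim_queued_deletes(self, context):
        # Mirrors "CONF.reclaim_instance_interval <= 0, skipping..."
        if CONF.reclaim_instance_interval <= 0:
            print("reclaim disabled, skipping")

if __name__ == "__main__":
    DemoManager().run_periodic_tasks(context=None)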
Feb 1 04:42:40 localhost podman[284572]: 2026-02-01 09:42:40.87149436 +0000 UTC m=+0.083649925 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 04:42:40 localhost podman[284572]: 2026-02-01 09:42:40.880039348 +0000 UTC m=+0.092194923 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 1 04:42:40 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.127 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.127 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.128 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.128 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.128 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604215@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604210 calling monitor election Feb 1 04:42:41 localhost ceph-mon[278949]: 
mon.np0005604212 calling monitor election Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604210,np0005604215,np0005604213,np0005604212 in quorum (ranks 0,1,2,3,4) Feb 1 04:42:41 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:42:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:41 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:42:41.759 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:42:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:42:41.761 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:42:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:42:41.761 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:42:41 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:42:41 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1268025690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:42:41 localhost nova_compute[274317]: 2026-02-01 09:42:41.791 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.002 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.003 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12411MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.004 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.004 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.403 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.404 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.433 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:42:42 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.870 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.876 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.896 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.899 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:42:42 localhost nova_compute[274317]: 2026-02-01 09:42:42.899 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:42:43 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:43 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:43 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:43 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:43 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 
04:42:43 localhost nova_compute[274317]: 2026-02-01 09:42:43.895 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:43 localhost nova_compute[274317]: 2026-02-01 09:42:43.895 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:44 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:44 localhost nova_compute[274317]: 2026-02-01 09:42:44.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:42:44 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:44 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:44 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:44 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:44 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:44 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:45 localhost ceph-mon[278949]: Reconfiguring crash.np0005604210 (monmap changed)... 
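In the resource-audit entries above, nova shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" via oslo.concurrency before reporting placement inventory. The sketch below runs the same command and reads back capacity; it assumes the usual ceph JSON field names (stats.total_bytes, stats.total_avail_bytes, pools[].stats.max_avail) and omits the pool-level accounting nova actually performs.

# Sketch: query cluster capacity the way the resource tracker entries above show.
import json
import subprocess

cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
df = json.loads(out)

total_gib = df["stats"]["total_bytes"] / 1024 ** 3
avail_gib = df["stats"]["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")

# Per-pool view; 'max_avail' is the figure capacity-aware consumers usually read.
for pool in df.get("pools", []):
    print(pool["name"], pool["stats"].get("max_avail"))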
Feb 1 04:42:45 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604210 on np0005604210.localdomain Feb 1 04:42:45 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:45 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:45 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:45 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:46 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604210.rirrtk (monmap changed)... Feb 1 04:42:46 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604210.rirrtk on np0005604210.localdomain Feb 1 04:42:46 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:46 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:46 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:46 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:46 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:42:47 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 1 04:42:47 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1047655474' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 1 04:42:47 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... Feb 1 04:42:47 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:42:47 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:47 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:47 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:47 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
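The systemd/podman pairs above ("Started /usr/bin/podman healthcheck run <id>", container health_status=healthy, exec_died, "<id>.service: Deactivated successfully.") are podman's transient healthcheck units running once and exiting. A small sketch of doing the same by hand for the ceilometer_agent_compute container named in the log; the inspect key for health state varies between podman versions, so both spellings are checked.

# Sketch of what the transient "podman healthcheck run <id>" units above do:
# run the container's healthcheck once, then read back the recorded status.
import json
import subprocess

name = "ceilometer_agent_compute"   # container name taken from the log

# Exit code 0 means the healthcheck command succeeded ("healthy" in the log).
rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
print("healthcheck exit code:", rc)

info = json.loads(
    subprocess.run(
        ["podman", "inspect", name], check=True, capture_output=True, text=True
    ).stdout
)[0]
state = info.get("State", {})
# Older podman stored this under "Healthcheck", newer under "Health".
health = state.get("Health") or state.get("Healthcheck") or {}
print("recorded status:", health.get("Status"))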
Feb 1 04:42:47 localhost podman[284973]: 2026-02-01 09:42:47.862871293 +0000 UTC m=+0.079074001 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:42:47 localhost podman[284973]: 2026-02-01 09:42:47.896711744 +0000 UTC m=+0.112914442 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:42:47 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:42:48 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:42:48 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:42:48 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:48 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:48 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:48 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:42:49 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:49 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... 
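The audit entries above show the mgr dispatching auth get-or-create for client.crash.<host> keys with the caps mon "profile crash" and mgr "profile crash" while cephadm reconfigures each crash daemon. The equivalent CLI transaction, using the same entity name and caps as the log (a live cluster and admin keyring are assumed):

# Sketch of the {"prefix": "auth get-or-create", ...} commands in the audit log:
# request a keyring for a crash.<host> daemon with the caps shown above.
import subprocess

host = "np0005604212.localdomain"        # one of the hosts named in the log
entity = f"client.crash.{host}"

keyring = subprocess.run(
    [
        "ceph", "auth", "get-or-create", entity,
        "mon", "profile crash",
        "mgr", "profile crash",
    ],
    check=True, capture_output=True, text=True,
).stdout
print(keyring)   # "[client.crash.<host>]\n\tkey = ..." on success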
Feb 1 04:42:49 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:49 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:42:50 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:42:50 localhost ceph-mon[278949]: Reconfig service osd.default_drive_group Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:50 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 e83: 6 total, 6 up, 6 in Feb 1 04:42:51 localhost systemd[1]: session-64.scope: Deactivated successfully. Feb 1 04:42:51 localhost systemd[1]: session-64.scope: Consumed 18.755s CPU time. Feb 1 04:42:51 localhost systemd-logind[761]: Session 64 logged out. Waiting for processes to exit. Feb 1 04:42:51 localhost systemd-logind[761]: Removed session 64. Feb 1 04:42:51 localhost sshd[284996]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:42:51 localhost systemd-logind[761]: New session 65 of user ceph-admin. Feb 1 04:42:51 localhost systemd[1]: Started Session 65 of User ceph-admin. Feb 1 04:42:51 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... 
Feb 1 04:42:51 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.14190 172.18.0.105:0/4078248779' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.14190 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/1066355409' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: Activating manager daemon np0005604213.caiaeh Feb 1 04:42:51 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:42:51 localhost ceph-mon[278949]: Manager daemon np0005604213.caiaeh is now available Feb 1 04:42:51 localhost ceph-mon[278949]: removing stray HostCache host record np0005604209.localdomain.devices.0 Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"}]': finished Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604209.localdomain.devices.0"}]': finished Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/mirror_snapshot_schedule"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/mirror_snapshot_schedule"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/trash_purge_schedule"} : dispatch Feb 1 04:42:51 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/trash_purge_schedule"} : dispatch Feb 1 04:42:52 localhost podman[285111]: 2026-02-01 09:42:52.507225474 +0000 UTC m=+0.085714279 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, release=1764794109, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:42:52 localhost podman[285111]: 2026-02-01 09:42:52.606859659 +0000 UTC m=+0.185348464 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, name=rhceph, GIT_BRANCH=main, release=1764794109, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True) Feb 1 04:42:53 localhost ceph-mon[278949]: [01/Feb/2026:09:42:52] ENGINE Bus STARTING Feb 1 04:42:53 localhost ceph-mon[278949]: [01/Feb/2026:09:42:52] ENGINE Serving on http://172.18.0.107:8765 Feb 1 04:42:53 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: [01/Feb/2026:09:42:52] ENGINE Serving on https://172.18.0.107:7150 Feb 1 04:42:54 localhost ceph-mon[278949]: [01/Feb/2026:09:42:52] ENGINE Bus STARTED Feb 1 04:42:54 localhost ceph-mon[278949]: [01/Feb/2026:09:42:52] ENGINE Client ('172.18.0.107', 
41850) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:54 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:42:55 localhost podman[285373]: 2026-02-01 09:42:55.601419239 +0000 UTC m=+0.090553861 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Feb 1 04:42:55 localhost podman[285373]: 2026-02-01 09:42:55.638036557 +0000 UTC m=+0.127171239 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller) Feb 1 04:42:55 localhost podman[285374]: 2026-02-01 09:42:55.650754557 +0000 UTC m=+0.139919420 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:42:55 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
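The node_exporter container above is started with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only a narrow set of units is scraped. The sketch below shows which unit names that filter keeps, assuming node_exporter's usual behaviour of anchoring include patterns as ^(?:pattern)$.

# Sketch: which systemd units the node_exporter flag in the entry above keeps.
import re

pattern = re.compile(r"^(?:(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service)$")

units = [
    "edpm_nova_compute.service",   # kept (edpm_.*)
    "ovs-vswitchd.service",        # kept (ovs.*)
    "openvswitch.service",         # kept (literal branch)
    "virtqemud.service",           # kept (virt.*)
    "rsyslog.service",             # kept
    "sshd.service",                # dropped
]
for unit in units:
    print(unit, "->", "kept" if pattern.match(unit) else "dropped")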
Feb 1 04:42:55 localhost podman[285374]: 2026-02-01 09:42:55.662708961 +0000 UTC m=+0.151873824 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:42:55 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' 
entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:42:56 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:42:56 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:42:56 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:42:56 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:42:56 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:42:56 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:56 localhost 
ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:56 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:42:57 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:57 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:57 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:57 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:57 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs 
*=*", "mds", "allow"]} : dispatch Feb 1 04:42:59 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:42:59 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:00 localhost podman[236852]: time="2026-02-01T09:43:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:43:00 localhost podman[236852]: @ - - [01/Feb/2026:09:43:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:43:00 localhost podman[236852]: @ - - [01/Feb/2026:09:43:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17776 "" "Go-http-client/1.1" Feb 1 04:43:00 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:43:00 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:43:00 localhost ceph-mon[278949]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:43:00 localhost ceph-mon[278949]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:43:00 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:00 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:00 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:00 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:01 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... 
Feb 1 04:43:01 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:43:01 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:01 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:01 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:43:01 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:43:01 localhost openstack_network_exporter[239388]: ERROR 09:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:43:01 localhost openstack_network_exporter[239388]: Feb 1 04:43:01 localhost openstack_network_exporter[239388]: ERROR 09:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:43:01 localhost openstack_network_exporter[239388]: Feb 1 04:43:02 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)... Feb 1 04:43:02 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:43:02 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:02 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:02 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:02 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:43:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:43:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
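Just above, the mon raises CEPHADM_STRAY_DAEMON and CEPHADM_STRAY_HOST while cephadm is busy reconfiguring daemons after the monmap change. To see which health-check codes fire across a capture like this one, a small tally is enough; a hedged sketch that assumes only the "Health check failed: ... (CODE)" wording shown above:

    import re
    from collections import Counter

    # Two sample messages copied (trimmed) from the ceph-mon lines above.
    log_lines = [
        "Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)",
        "Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)",
    ]

    codes = Counter()
    for line in log_lines:
        m = re.search(r"Health check failed: .*\((\w+)\)$", line)
        if m:
            codes[m.group(1)] += 1

    print(codes)   # Counter({'CEPHADM_STRAY_DAEMON': 1, 'CEPHADM_STRAY_HOST': 1})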
Feb 1 04:43:02 localhost podman[286062]: 2026-02-01 09:43:02.892992348 +0000 UTC m=+0.073381542 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, release=1769056855) Feb 1 04:43:02 localhost podman[286062]: 2026-02-01 09:43:02.904835539 +0000 UTC m=+0.085224733 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-22T05:09:47Z, release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, version=9.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=openstack_network_exporter) Feb 1 04:43:02 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
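The two entries above are one pass of the systemd-driven podman healthchecks: a transient unit runs "podman healthcheck run <id>", podman reports health_status=healthy for the container, the exec session dies, and the unit deactivates. Grouping events by the 64-character container ID is enough to follow one container through that cycle; a sketch under that assumption, with sample events abbreviated from the lines above:

    import re
    from collections import defaultdict

    # Abbreviated events for container 1e81c1b8... taken from the lines above.
    events = [
        "podman[286062]: ... container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (health_status=healthy)",
        "podman[286062]: ... container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (...)",
        "systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully.",
    ]

    lifecycle = defaultdict(list)
    for line in events:
        cid = re.search(r"\b([0-9a-f]{64})\b", line)
        if not cid:
            continue
        kind = re.search(r"container (\w+)", line)
        lifecycle[cid.group(1)[:12]].append(kind.group(1) if kind else "unit_deactivated")

    print(dict(lifecycle))
    # {'1e81c1b86182': ['health_status', 'exec_died', 'unit_deactivated']}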
Feb 1 04:43:02 localhost podman[286063]: 2026-02-01 09:43:02.957256484 +0000 UTC m=+0.130183924 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Feb 1 04:43:02 localhost podman[286063]: 2026-02-01 09:43:02.966619917 +0000 UTC m=+0.139547337 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible) Feb 1 04:43:02 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:43:03 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:43:03 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:43:03 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:03 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:03 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:03 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:03 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.404 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.408 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:43:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:43:04 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:04 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:43:04 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:43:04 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:43:05 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:43:05 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:43:05 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:05 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:05 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... 
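The ceilometer_agent_compute DEBUG lines above repeat "Skip pollster <meter>, no resources found this cycle" for every configured meter, presumably because this compute node has no instances to poll. If you want the list of skipped meters from a capture rather than eyeballing it, a short extraction works; a sketch assuming only that message wording:

    import re

    # A few of the DEBUG messages from the lines above, trimmed to the interesting part.
    sample = """\
    Skip pollster cpu, no resources found this cycle
    Skip pollster disk.device.read.bytes, no resources found this cycle
    Skip pollster memory.usage, no resources found this cycle
    Skip pollster network.outgoing.packets, no resources found this cycle
    """

    skipped = sorted(
        m.group(1) for m in re.finditer(r"Skip pollster ([\w.]+), no resources found", sample)
    )
    print(skipped)
    # ['cpu', 'disk.device.read.bytes', 'memory.usage', 'network.outgoing.packets']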
Feb 1 04:43:05 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:05 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:05 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:43:06 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:43:06 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:06 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:06 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:06 localhost ceph-mon[278949]: Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:43:06 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:06 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:43:06 localhost podman[286153]: Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.885642561 +0000 UTC m=+0.068227071 container create 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, release=1764794109, GIT_BRANCH=main, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, io.openshift.tags=rhceph ceph) Feb 1 04:43:06 localhost systemd[1]: Started libpod-conmon-52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d.scope. Feb 1 04:43:06 localhost systemd[1]: Started libcrun container. 
Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.951435985 +0000 UTC m=+0.134020405 container init 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, release=1764794109, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.859896363 +0000 UTC m=+0.042480813 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.962418969 +0000 UTC m=+0.145003399 container start 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, release=1764794109, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, CEPH_POINT_RELEASE=) Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.962687947 +0000 UTC m=+0.145272387 container attach 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, version=7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , 
url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, ceph=True, release=1764794109, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:43:06 localhost interesting_ritchie[286168]: 167 167 Feb 1 04:43:06 localhost systemd[1]: libpod-52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d.scope: Deactivated successfully. Feb 1 04:43:06 localhost podman[286153]: 2026-02-01 09:43:06.966032342 +0000 UTC m=+0.148616782 container died 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64) Feb 1 04:43:07 localhost podman[286173]: 2026-02-01 09:43:07.058646897 +0000 UTC m=+0.083693156 container remove 52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_ritchie, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1764794109, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
name=rhceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7) Feb 1 04:43:07 localhost systemd[1]: libpod-conmon-52d3777649e7139cc129c0eef708b23b57fd6ddf5ca55ad00d63de305b18da3d.scope: Deactivated successfully. Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:07 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:43:07 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:07 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:43:07 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:43:07 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:43:07 localhost podman[286243]: Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.746610244 +0000 UTC m=+0.076882103 container create b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109) Feb 1 04:43:07 localhost systemd[1]: Started libpod-conmon-b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b.scope. Feb 1 04:43:07 localhost systemd[1]: Started libcrun container. 
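The registry.redhat.io/rhceph containers created above and below (interesting_ritchie, stupefied_ellis, and the rest) are throwaway helpers cephadm launches during reconfiguration; each prints just "167 167" (by the look of it, the ceph uid/gid it probes for) and is gone within a few milliseconds. The podman event timestamps make the lifetime easy to confirm; a sketch using the figures copied from interesting_ritchie's create/init/start/died events above:

    from datetime import datetime

    # Timestamps copied from the podman events for container 52d3777649e7... above.
    events = {
        "create": "2026-02-01 09:43:06.885642561",
        "init":   "2026-02-01 09:43:06.951435985",
        "start":  "2026-02-01 09:43:06.962418969",
        "died":   "2026-02-01 09:43:06.966032342",
    }

    # Trim to microseconds (26 characters) so fromisoformat accepts the value on any Python 3.7+.
    parsed = {name: datetime.fromisoformat(stamp[:26]) for name, stamp in events.items()}

    print("start -> died: ", (parsed["died"] - parsed["start"]).total_seconds(), "s")   # ~0.0036 s
    print("create -> died:", (parsed["died"] - parsed["create"]).total_seconds(), "s")  # ~0.08 s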
Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.811088096 +0000 UTC m=+0.141359995 container init b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109) Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.715839799 +0000 UTC m=+0.046111698 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.822532875 +0000 UTC m=+0.152804724 container start b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, distribution-scope=public) Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.822941848 +0000 UTC m=+0.153213697 container attach b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 
7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:43:07 localhost stupefied_ellis[286258]: 167 167 Feb 1 04:43:07 localhost systemd[1]: libpod-b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b.scope: Deactivated successfully. Feb 1 04:43:07 localhost podman[286243]: 2026-02-01 09:43:07.826486529 +0000 UTC m=+0.156758398 container died b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vendor=Red Hat, Inc., release=1764794109, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-cf8cf44bb348c9ea2d8dcfa3b45cd020c7f772e2941b4f1ee2e3a46865527e17-merged.mount: Deactivated successfully. Feb 1 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-3f447bde679b1d24d5382b25f8629a43d466226929c54cdaf02cf53c37e2c0d5-merged.mount: Deactivated successfully. 
Feb 1 04:43:07 localhost podman[286263]: 2026-02-01 09:43:07.930275474 +0000 UTC m=+0.090725846 container remove b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_ellis, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:43:07 localhost systemd[1]: libpod-conmon-b628758ffcc4605feb9996548cd4ed4e845742d5613ec37e4b24e77c028b605b.scope: Deactivated successfully. Feb 1 04:43:08 localhost podman[286338]: Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.803496261 +0000 UTC m=+0.075326813 container create 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git) Feb 1 04:43:08 localhost systemd[1]: Started libpod-conmon-087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798.scope. Feb 1 04:43:08 localhost systemd[1]: Started libcrun container. 
Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.87169346 +0000 UTC m=+0.143524012 container init 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, architecture=x86_64, version=7, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.773847591 +0000 UTC m=+0.045678173 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.88125852 +0000 UTC m=+0.153089082 container start 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., ceph=True, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.881546639 +0000 UTC m=+0.153377241 container attach 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1764794109, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, ceph=True) Feb 1 04:43:08 localhost boring_wu[286353]: 167 167 Feb 1 04:43:08 localhost systemd[1]: libpod-087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798.scope: Deactivated successfully. Feb 1 04:43:08 localhost podman[286338]: 2026-02-01 09:43:08.884017117 +0000 UTC m=+0.155847689 container died 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:43:08 localhost systemd[1]: tmp-crun.nWjMYG.mount: Deactivated successfully. Feb 1 04:43:08 localhost systemd[1]: var-lib-containers-storage-overlay-3c2ac1cd2b5dc283264ce2a72a7b16b21161e1cb1490c31004d3b1894bf73256-merged.mount: Deactivated successfully. 
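Scattered through this capture, the mgr walks every daemon it manages and logs "Reconfiguring daemon <name> on <host>" because the monmap changed. To get a quick inventory of what was touched where, the pairs can be pulled straight out of those messages; a sketch assuming only that wording, with samples copied from lines above and below:

    import re

    # Sample messages copied from the ceph-mon lines in this capture.
    lines = [
        "Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain",
        "Reconfiguring daemon osd.0 on np0005604213.localdomain",
        "Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain",
        "Reconfiguring daemon osd.5 on np0005604215.localdomain",
    ]

    pairs = [
        re.search(r"Reconfiguring daemon (\S+) on (\S+)", line).groups()
        for line in lines
        if "Reconfiguring daemon" in line
    ]
    for daemon, host in pairs:
        print(f"{daemon:<30} {host}")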
Feb 1 04:43:08 localhost podman[286359]: 2026-02-01 09:43:08.993973625 +0000 UTC m=+0.097008434 container remove 087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_wu, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:43:08 localhost systemd[1]: libpod-conmon-087efad5aff32d603da4573c076b8607d80c9ecb109fa0cf1a2ac0a967141798.scope: Deactivated successfully. Feb 1 04:43:09 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:09 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:09 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:09 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:09 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:09 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:43:09 localhost podman[286435]: Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.905485823 +0000 UTC m=+0.075648804 container create 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dewdney, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, release=1764794109, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, 
name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z) Feb 1 04:43:09 localhost systemd[1]: Started libpod-conmon-91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713.scope. Feb 1 04:43:09 localhost systemd[1]: Started libcrun container. Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.973192206 +0000 UTC m=+0.143355187 container init 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dewdney, distribution-scope=public, release=1764794109, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.874964485 +0000 UTC m=+0.045127466 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.981854419 +0000 UTC m=+0.152017390 container start 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dewdney, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z) Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.982123577 +0000 UTC m=+0.152286558 container attach 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=determined_dewdney, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, io.buildah.version=1.41.4, distribution-scope=public, release=1764794109, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main) Feb 1 04:43:09 localhost determined_dewdney[286450]: 167 167 Feb 1 04:43:09 localhost systemd[1]: libpod-91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713.scope: Deactivated successfully. Feb 1 04:43:09 localhost podman[286435]: 2026-02-01 09:43:09.985196903 +0000 UTC m=+0.155359894 container died 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dewdney, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7) Feb 1 04:43:10 localhost podman[286455]: 2026-02-01 09:43:10.078969494 +0000 UTC m=+0.084540703 container remove 91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dewdney, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, vcs-type=git, GIT_CLEAN=True, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph) Feb 1 04:43:10 localhost systemd[1]: libpod-conmon-91f505ad3d954de0519d85aa866a829925952846448c010a7436941ab9b26713.scope: Deactivated successfully. Feb 1 04:43:10 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... Feb 1 04:43:10 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:43:10 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:10 localhost podman[286526]: Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.814593136 +0000 UTC m=+0.079087681 container create c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4) Feb 1 04:43:10 localhost systemd[1]: Started libpod-conmon-c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04.scope. 
Feb 1 04:43:10 localhost systemd[1]: Started libcrun container. Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.876812927 +0000 UTC m=+0.141307442 container init c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64) Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.7841045 +0000 UTC m=+0.048599045 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.885868571 +0000 UTC m=+0.150363116 container start c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, RELEASE=main, vendor=Red Hat, Inc., release=1764794109, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, version=7) Feb 1 04:43:10 localhost relaxed_taussig[286541]: 167 167 Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.886281474 +0000 UTC m=+0.150776009 container attach c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., 
vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, release=1764794109, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph) Feb 1 04:43:10 localhost systemd[1]: libpod-c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04.scope: Deactivated successfully. Feb 1 04:43:10 localhost podman[286526]: 2026-02-01 09:43:10.8896485 +0000 UTC m=+0.154143035 container died c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, release=1764794109, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-333cdf6ed04606b2443d6c45c9aaacf06093307685a9232066d175831129dab4-merged.mount: Deactivated successfully. Feb 1 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-8f2c777c9ebb8086f65143a5e908dc6786b140d5db04925da3e8b2bef752823b-merged.mount: Deactivated successfully. 
Feb 1 04:43:11 localhost podman[286547]: 2026-02-01 09:43:11.00442976 +0000 UTC m=+0.101799254 container remove c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_taussig, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1764794109, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64) Feb 1 04:43:11 localhost systemd[1]: libpod-conmon-c899daceff4522ed65a81a07fb017999c8c8078003d3eebefeba239443b68e04.scope: Deactivated successfully. Feb 1 04:43:11 localhost podman[286556]: 2026-02-01 09:43:11.087220176 +0000 UTC m=+0.152692419 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
config_id=ceilometer_agent_compute) Feb 1 04:43:11 localhost podman[286556]: 2026-02-01 09:43:11.100998908 +0000 UTC m=+0.166471201 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:43:11 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring mon.np0005604215 (monmap changed)... 
Feb 1 04:43:11 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:11 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:43:11 localhost podman[286637]: Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.669443937 +0000 UTC m=+0.045548980 container create 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:43:11 localhost systemd[1]: Started libpod-conmon-5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5.scope. Feb 1 04:43:11 localhost systemd[1]: Started libcrun container. 
Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.719460906 +0000 UTC m=+0.095565969 container init 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, distribution-scope=public, vendor=Red Hat, Inc., version=7) Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.726739043 +0000 UTC m=+0.102844106 container start 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, release=1764794109, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z) Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.727369294 +0000 UTC m=+0.103474357 container attach 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1764794109, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:43:11 localhost unruffled_swartz[286652]: 167 167 Feb 1 04:43:11 localhost systemd[1]: libpod-5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5.scope: Deactivated successfully. Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.730035767 +0000 UTC m=+0.106140830 container died 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Feb 1 04:43:11 localhost podman[286637]: 2026-02-01 09:43:11.65200214 +0000 UTC m=+0.028107183 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:43:11 localhost podman[286658]: 2026-02-01 09:43:11.819635587 +0000 UTC m=+0.081891050 container remove 5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_swartz, io.openshift.expose-services=, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container) Feb 1 04:43:11 localhost systemd[1]: libpod-conmon-5a2b9b6dabb51b23c433dd4ad69153481cc77923b404ca4f90b8962c9bcab9b5.scope: Deactivated successfully. Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:13 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:14 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:14 localhost ceph-mon[278949]: Reconfiguring mon.np0005604210 (monmap changed)... Feb 1 04:43:14 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604210 on np0005604210.localdomain Feb 1 04:43:14 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:14 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:14 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:14 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 e84: 6 total, 6 up, 6 in Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604210"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604210"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604211"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604211"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604212"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604212"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : 
from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604215"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604215"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604210.rirrtk", "id": "np0005604210.rirrtk"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604210.rirrtk", "id": "np0005604210.rirrtk"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' 
entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604211.cuflqz", "id": "np0005604211.cuflqz"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604211.cuflqz", "id": "np0005604211.cuflqz"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": 
"mon metadata"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: Reconfiguring mon.np0005604211 (monmap changed)... Feb 1 04:43:15 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604211 on np0005604211.localdomain Feb 1 04:43:15 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:15 localhost ceph-mon[278949]: Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:43:15 localhost ceph-mon[278949]: from='mgr.17070 172.18.0.107:0/3506477835' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:43:15 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/474945783' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: Activating manager daemon np0005604209.isqrps Feb 1 04:43:15 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:43:15 localhost ceph-mon[278949]: from='mgr.17070 ' entity='mgr.np0005604213.caiaeh' Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon).mds e16 all = 1 Feb 1 04:43:15 localhost systemd[1]: session-65.scope: Deactivated successfully. Feb 1 04:43:15 localhost systemd[1]: session-65.scope: Consumed 10.529s CPU time. Feb 1 04:43:15 localhost systemd-logind[761]: Session 65 logged out. Waiting for processes to exit. Feb 1 04:43:15 localhost systemd-logind[761]: Removed session 65. Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:43:15 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} v 0) Feb 1 04:43:15 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:43:15 localhost sshd[286693]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:43:15 localhost systemd-logind[761]: New session 66 of user ceph-admin. Feb 1 04:43:15 localhost systemd[1]: Started Session 66 of User ceph-admin. 
Feb 1 04:43:16 localhost ceph-mon[278949]: Manager daemon np0005604209.isqrps is now available Feb 1 04:43:16 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:43:16 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:43:16 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:43:16 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.302878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996302917, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 6152, "num_deletes": 759, "total_data_size": 20046824, "memory_usage": 21016520, "flush_reason": "Manual Compaction"} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996372553, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 11726491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9175, "largest_seqno": 15322, "table_properties": {"data_size": 11703142, "index_size": 14734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 66356, "raw_average_key_size": 22, "raw_value_size": 11647748, "raw_average_value_size": 3876, "num_data_blocks": 640, "num_entries": 3005, "num_filter_entries": 3005, "num_deletions": 756, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938870, "oldest_key_time": 1769938870, "file_creation_time": 1769938996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 17, 
"seqno_to_time_mapping": "N/A"}} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 70064 microseconds, and 19946 cpu microseconds. Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.372934) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 11726491 bytes OK Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.373063) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.374990) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.375015) EVENT_LOG_v1 {"time_micros": 1769938996375009, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.375037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 20016009, prev total WAL file size 20016009, number of live WAL files 2. Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.378983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. 
'7061786F73003130323932' seq:0, type:0; will stop at (end) Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(11MB)], [15(8830KB)] Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996379061, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20768923, "oldest_snapshot_seqno": -1} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10076 keys, 16967028 bytes, temperature: kUnknown Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996483740, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 16967028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16905284, "index_size": 35467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25221, "raw_key_size": 267877, "raw_average_key_size": 26, "raw_value_size": 16728945, "raw_average_value_size": 1660, "num_data_blocks": 1376, "num_entries": 10076, "num_filter_entries": 10076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769938996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.484200) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 16967028 bytes Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.486148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.9 rd, 161.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(11.2, 8.6 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(3.2) write-amplify(1.4) OK, records in: 11596, records dropped: 1520 output_compression: NoCompression Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.486177) EVENT_LOG_v1 {"time_micros": 1769938996486165, "job": 6, "event": "compaction_finished", "compaction_time_micros": 104934, "compaction_time_cpu_micros": 46467, "output_level": 6, "num_output_files": 1, "total_output_size": 16967028, "num_input_records": 11596, "num_output_records": 10076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996488476, "job": 6, "event": "table_file_deletion", "file_number": 17} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769938996489976, "job": 6, "event": "table_file_deletion", "file_number": 15} Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.378891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.490187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.490195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.490198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.490201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:16.490204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:16 localhost podman[286803]: 2026-02-01 09:43:16.691500235 +0000 UTC m=+0.086998030 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
description=Red Hat Ceph Storage 7, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main) Feb 1 04:43:16 localhost podman[286803]: 2026-02-01 09:43:16.822989679 +0000 UTC m=+0.218487444 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1764794109, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain.devices.0}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:17 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:43:17 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: [01/Feb/2026:09:43:17] ENGINE Bus STARTING Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
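The JOB 6 compaction summary above derives its throughput and amplification figures directly from the byte counts in the same entry: the in/out sizes are printed in MiB, while the rd/wr rates come out as decimal MB/s. A minimal sketch of that arithmetic, using only constants copied from the JOB 6 lines (variable names are illustrative, not RocksDB API):

# Recompute the JOB 6 compaction figures from the values logged above.
# Assumption: the in/out figures in the summary are MiB and the rates are
# decimal MB/s; this reproduces the logged 197.9/161.7 and 3.2/1.4 values.
MIB = 1024 * 1024

in_l0_mib, in_l6_mib, out_mib = 11.2, 8.6, 16.2          # "MB in(11.2, 8.6) out(16.2)"
total_output_bytes = 16967028                             # "Compacted ... => 16967028 bytes"
compaction_time_s = 104934 / 1e6                          # "compaction_time_micros": 104934

write_amp = out_mib / in_l0_mib                           # bytes written per byte flushed from L0
rw_amp = (in_l0_mib + in_l6_mib + out_mib) / in_l0_mib    # total I/O per byte flushed from L0
read_mb_s = (in_l0_mib + in_l6_mib) * MIB / compaction_time_s / 1e6
write_mb_s = total_output_bytes / compaction_time_s / 1e6

print(f"write-amplify ~ {write_amp:.1f}")                 # ~1.4, matches the log
print(f"read-write-amplify ~ {rw_amp:.1f}")               # ~3.2, matches the log
print(f"rd ~ {read_mb_s:.1f} MB/s, wr ~ {write_mb_s:.1f} MB/s")  # ~197.9 / ~161.7

The JOB 8 summary further down behaves the same way: from the exact byte counts, (1502961 + 16967028 + 17441044) / 1502961 ≈ 23.9 and 17441044 / 1502961 ≈ 11.6, large only because a ~1.4 MiB L0 flush forced a rewrite of the whole ~16 MiB L6 file.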
Feb 1 04:43:18 localhost podman[287006]: 2026-02-01 09:43:18.547846345 +0000 UTC m=+0.089261470 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:43:18 localhost podman[287006]: 2026-02-01 09:43:18.562620179 +0000 UTC m=+0.104035284 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:43:18 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain.devices.0}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:43:18 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 
172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:43:19 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost 
ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:43:19 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:43:19 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' 
cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:43:19 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:43:19 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:43:19 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:43:19 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:43:19 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:43:19 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:43:19 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:43:20 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} v 0) Feb 1 04:43:20 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} : dispatch Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: Updating 
np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain.devices.0}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604210.localdomain}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:43:21 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
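The "Adjusting osd_memory_target ... to 836.6M" / "Unable to set ... below minimum 939524096" pairs above are cephadm's memory autotuner proposing a per-OSD target that the option's hard minimum rejects, which is why the config rm / config set churn for osd_memory_target repeats on every pass. The numbers check out with plain arithmetic (a small sketch; only the two constants are taken from this log, the closing comment is a simplified illustration of where such a value can come from):

# Sanity-check the osd_memory_target messages logged above.
proposed = 877_246_668          # bytes cephadm tried to set for np0005604212
minimum  = 939_524_096          # hard minimum reported by the mon

print(f"proposed = {proposed / 2**20:.1f} MiB")                                # 836.6 MiB -> "836.6M"
print(f"minimum  = {minimum / 2**20:.0f} MiB ({minimum / 2**30:.3f} GiB)")     # 896 MiB / 0.875 GiB
print("rejected:", proposed < minimum)                                         # True -> "Unable to set ..."

# Illustration only (values assumed, not from this log): with autotuning enabled,
# cephadm derives the per-OSD target roughly from host RAM times a ratio divided
# by the OSD count, so a small host can land below the 896 MiB floor every time.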
Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.162348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002162389, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 654, "num_deletes": 256, "total_data_size": 2377265, "memory_usage": 2410704, "flush_reason": "Manual Compaction"} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002171417, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1502961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15327, "largest_seqno": 15976, "table_properties": {"data_size": 1499505, "index_size": 1311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8247, "raw_average_key_size": 19, "raw_value_size": 1492156, "raw_average_value_size": 3470, "num_data_blocks": 51, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938997, "oldest_key_time": 1769938997, "file_creation_time": 1769939002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9114 microseconds, and 4138 cpu microseconds. Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
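The rocksdb EVENT_LOG_v1 entries above (flush_started, table_file_creation, flush_finished, and the compaction_* events) carry a JSON payload after the "EVENT_LOG_v1" marker, which makes them easy to pull out of the journal for ad-hoc analysis. A minimal sketch under the assumption that each event sits on a single journal line, as in this excerpt (helper names are illustrative):

import json
import re

# Extract the JSON payload from rocksdb "EVENT_LOG_v1 {...}" journal lines.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(lines):
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

sample = ('Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 '
          '{"time_micros": 1769939002175116, "job": 7, "event": "flush_finished", '
          '"output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], '
          '"immutable_memtables": 0}')

for ev in rocksdb_events([sample]):
    print(ev["job"], ev["event"], ev["lsm_state"])   # 7 flush_finished [1, 0, 0, 0, 0, 0, 1]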
Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.171461) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1502961 bytes OK Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.171483) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.175099) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.175122) EVENT_LOG_v1 {"time_micros": 1769939002175116, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.175141) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2373439, prev total WAL file size 2373439, number of live WAL files 2. Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.176013) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303034' seq:72057594037927935, type:22 .. '6B760031323631' seq:0, type:0; will stop at (end) Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1467KB)], [18(16MB)] Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002176053, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18469989, "oldest_snapshot_seqno": -1} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 9968 keys, 17441044 bytes, temperature: kUnknown Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002277524, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 17441044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17380922, "index_size": 34101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 267328, "raw_average_key_size": 26, "raw_value_size": 17207177, "raw_average_value_size": 1726, "num_data_blocks": 1299, "num_entries": 9968, "num_filter_entries": 9968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", 
"merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769939002, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.277867) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 17441044 bytes Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.279680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.8 rd, 171.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 16.2 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(23.9) write-amplify(11.6) OK, records in: 10506, records dropped: 538 output_compression: NoCompression Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.279714) EVENT_LOG_v1 {"time_micros": 1769939002279700, "job": 8, "event": "compaction_finished", "compaction_time_micros": 101587, "compaction_time_cpu_micros": 46301, "output_level": 6, "num_output_files": 1, "total_output_size": 17441044, "num_input_records": 10506, "num_output_records": 9968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002280069, "job": 8, "event": "table_file_deletion", "file_number": 20} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939002282682, "job": 8, "event": "table_file_deletion", "file_number": 18} Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.175941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.282935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.282943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.282946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction 
starting Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.282948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:22 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:43:22.282951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:43:22 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:22 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 1 04:43:22 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:43:22 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:43:22 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:43:22 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps' Feb 1 04:43:22 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:43:23 localhost ceph-mon[278949]: 
log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 1 04:43:23 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 1 04:43:23 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 1 04:43:23 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 1 04:43:23 localhost ceph-mon[278949]: Reconfiguring mon.np0005604212 (monmap changed)...
Feb 1 04:43:23 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain
Feb 1 04:43:23 localhost ceph-mon[278949]: [01/Feb/2026:09:43:22] ENGINE Error in 'start' listener >#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish#012 output.append(listener(*args, **kwargs))#012 File "/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start#012 super(Server, self).start()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start#012 self.wait()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait#012 portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)#012 File "/lib/python3.9/site-packages/portend.py", line 162, in occupied#012 raise Timeout("Port {port} not bound on {host}.".format(**locals()))#012portend.Timeout: Port 8765 not bound on 172.18.0.103.
Feb 1 04:43:23 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps'
Feb 1 04:43:23 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps'
Feb 1 04:43:23 localhost ceph-mon[278949]: from='mgr.17208 172.18.0.103:0/3314697342' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 1 04:43:23 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps'
Feb 1 04:43:24 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 1 04:43:25 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 1 04:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.
Feb 1 04:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.
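The CherryPy traceback above comes from the mgr cephadm module's embedded HTTP listeners: portend.occupied() waits for the configured address to start accepting connections and raises Timeout when nothing ever binds, which here surfaces as "Port 8765 not bound on 172.18.0.103" and, a few seconds later, as the "Failed to run cephadm http server" shutdown for both ports. A minimal way to reproduce the same check by hand is a plain TCP connect probe from a host that can reach 172.18.0.103 (a sketch approximating what portend waits for, not the cephadm code; the two ports are simply the ones named in the tracebacks):

import socket

# Probe whether anything is listening on the endpoints named in the traceback.
def is_bound(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (8765, 7150):   # ports taken from the "Port ... not bound" messages above
    print(port, "bound" if is_bound("172.18.0.103", port) else "not bound")

A persistent "not bound" result usually means the listener never started on that IP at all (for example the address is not local to the host running the active mgr), rather than a transient startup race.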
Feb 1 04:43:25 localhost podman[287742]: 2026-02-01 09:43:25.877403233 +0000 UTC m=+0.089948822 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:43:25 localhost podman[287742]: 2026-02-01 09:43:25.89164953 +0000 UTC m=+0.104195059 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:43:25 localhost systemd[1]: tmp-crun.udhjZn.mount: Deactivated successfully. 
Feb 1 04:43:25 localhost podman[287741]: 2026-02-01 09:43:25.924457868 +0000 UTC m=+0.137764942 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 1 04:43:25 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:43:25 localhost podman[287741]: 2026-02-01 09:43:25.993925067 +0000 UTC m=+0.207232171 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:43:26 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:43:26 localhost ceph-mon[278949]: from='mgr.17208 ' entity='mgr.np0005604209.isqrps'
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Error in 'start' listener >#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish#012 output.append(listener(*args, **kwargs))#012 File "/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start#012 super(Server, self).start()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start#012 self.wait()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait#012 portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)#012 File "/lib/python3.9/site-packages/portend.py", line 162, in occupied#012 raise Timeout("Port {port} not bound on {host}.".format(**locals()))#012portend.Timeout: Port 7150 not bound on 172.18.0.103.
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Shutting down due to error in start listener:#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 268, in start#012 self.publish('start')#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 248, in publish#012 raise exc#012cherrypy.process.wspbus.ChannelFailures: Timeout('Port 8765 not bound on 172.18.0.103.')#012Timeout('Port 7150 not bound on 172.18.0.103.')
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Bus STOPPING
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 8765)) already shut down
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 7150)) already shut down
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Bus STOPPED
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Bus EXITING
Feb 1 04:43:28 localhost ceph-mon[278949]: [01/Feb/2026:09:43:27] ENGINE Bus EXITED
Feb 1 04:43:28 localhost ceph-mon[278949]: Failed to run cephadm http server: Timeout('Port 8765 not bound on 172.18.0.103.')#012Timeout('Port 7150 not bound on 172.18.0.103.')
Feb 1 04:43:29 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 1 04:43:30 localhost podman[236852]: time="2026-02-01T09:43:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 1 04:43:30 localhost podman[236852]: @ - - [01/Feb/2026:09:43:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1"
Feb 1 04:43:30 localhost podman[236852]: @ - - [01/Feb/2026:09:43:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1"
Feb 1 04:43:31 localhost openstack_network_exporter[239388]: ERROR 09:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 1 04:43:31 localhost openstack_network_exporter[239388]:
Feb 1 04:43:31 localhost openstack_network_exporter[239388]: ERROR 09:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 1 04:43:31 localhost openstack_network_exporter[239388]:
Feb 1 04:43:33 localhost systemd[1]: Started /usr/bin/podman
healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:43:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:43:33 localhost podman[287788]: 2026-02-01 09:43:33.856518725 +0000 UTC m=+0.072623719 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, release=1769056855, architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal) Feb 1 04:43:33 localhost podman[287788]: 2026-02-01 09:43:33.869703219 +0000 UTC m=+0.085808213 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, distribution-scope=public, version=9.7, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Feb 1 04:43:33 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:43:33 localhost podman[287789]: 2026-02-01 09:43:33.918964153 +0000 UTC m=+0.132480285 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:43:33 localhost podman[287789]: 2026-02-01 09:43:33.95265815 +0000 UTC m=+0.166174262 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:43:33 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:43:34 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:37 localhost nova_compute[274317]: 2026-02-01 09:43:37.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:37 localhost nova_compute[274317]: 2026-02-01 09:43:37.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:43:37 localhost nova_compute[274317]: 2026-02-01 09:43:37.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:43:37 localhost nova_compute[274317]: 2026-02-01 09:43:37.115 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:43:39 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:39 localhost nova_compute[274317]: 2026-02-01 09:43:39.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:39 localhost nova_compute[274317]: 2026-02-01 09:43:39.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:40 localhost nova_compute[274317]: 2026-02-01 09:43:40.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:40 localhost nova_compute[274317]: 2026-02-01 09:43:40.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:43:41 localhost nova_compute[274317]: 2026-02-01 09:43:41.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:43:41.761 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:43:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:43:41.761 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:43:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:43:41.761 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:43:41 localhost podman[287824]: 2026-02-01 09:43:41.865172604 +0000 UTC m=+0.081758165 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team) Feb 1 04:43:41 localhost podman[287824]: 2026-02-01 09:43:41.877471759 +0000 UTC m=+0.094057360 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:43:41 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
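Editor's note: the "Acquiring lock" / "Lock ... acquired" / "released" triplets logged by ovn_metadata_agent above are emitted by oslo.concurrency's synchronized wrapper (the `inner` function in the lockutils.py paths shown); neutron's ProcessMonitor serializes _check_child_processes behind a named lock. Below is a minimal sketch of the same pattern, assuming the oslo.concurrency package is installed; the function body is illustrative, not neutron's code.

# Sketch of the lock pattern behind the "Acquiring lock ... by ... inner"
# DEBUG lines above: oslo.concurrency's synchronized decorator wraps the
# callable and logs acquire/release (and waited/held timings) around it.
# Assumes oslo.concurrency is installed; the lock name matches the log,
# the body is illustrative only.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named in-process lock held; concurrent callers in the
    # same service serialize here, which is what the timings measure.
    return "checked"

if __name__ == "__main__":
    print(check_child_processes())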
Feb 1 04:43:42 localhost nova_compute[274317]: 2026-02-01 09:43:42.095 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.141 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.142 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.143 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.143 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.143 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.608 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.815 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.817 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12408MB free_disk=0.0GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.817 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.818 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.880 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.880 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=0GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:43:43 localhost nova_compute[274317]: 2026-02-01 09:43:43.916 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:43:44 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.349 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.354 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.415 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updated inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.415 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.416 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.436 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:43:44 localhost nova_compute[274317]: 2026-02-01 09:43:44.437 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:43:45 localhost nova_compute[274317]: 2026-02-01 09:43:45.433 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:45 localhost nova_compute[274317]: 2026-02-01 09:43:45.453 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:45 localhost nova_compute[274317]: 2026-02-01 09:43:45.453 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:43:48 localhost systemd[1]: tmp-crun.8zd7tv.mount: Deactivated successfully. 
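Editor's note: the update_available_resource pass above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (via oslo_concurrency.processutils) so the libvirt/RBD driver can size disk capacity before pushing the MEMORY_MB/VCPU inventory to Placement. A minimal sketch of that probe follows, assuming the stock `ceph df` JSON layout (top-level stats.total_avail_bytes and per-pool stats.max_avail); those field names come from standard Ceph output, not from this log.

# Sketch: reproduce the capacity probe that nova-compute logs above
# ("Running cmd (subprocess): ceph df --format=json ...").
# Assumes the ceph CLI, the "openstack" keyring and /etc/ceph/ceph.conf are
# present, and that the JSON layout matches stock `ceph df` output.
import json
import subprocess

def ceph_capacity_gib(client="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(out)
    gib = 1024 ** 3
    total_avail = df["stats"]["total_avail_bytes"] / gib
    per_pool = {p["name"]: p["stats"]["max_avail"] / gib for p in df["pools"]}
    return total_avail, per_pool

if __name__ == "__main__":
    avail, pools = ceph_capacity_gib()
    print(f"cluster avail: {avail:.1f} GiB")
    for name, max_avail in pools.items():
        print(f"  pool {name}: max_avail {max_avail:.1f} GiB")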
Feb 1 04:43:48 localhost podman[287887]: 2026-02-01 09:43:48.896057086 +0000 UTC m=+0.104999555 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:43:48 localhost podman[287887]: 2026-02-01 09:43:48.907743952 +0000 UTC m=+0.116686441 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:43:48 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:43:49 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:54 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:43:56 localhost systemd[1]: tmp-crun.cxkZIw.mount: Deactivated successfully. 
Feb 1 04:43:56 localhost podman[287911]: 2026-02-01 09:43:56.883490158 +0000 UTC m=+0.091386618 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller) Feb 1 04:43:56 localhost podman[287912]: 2026-02-01 09:43:56.952064938 +0000 UTC m=+0.157093408 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:43:56 localhost podman[287912]: 2026-02-01 09:43:56.96264835 +0000 UTC m=+0.167676800 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:43:56 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:43:56 localhost podman[287911]: 2026-02-01 09:43:56.984876728 +0000 UTC m=+0.192773198 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:43:57 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
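Editor's note: node_exporter, openstack_network_exporter and podman_exporter all run with 'net': 'host' and publish ports 9100, 9105 and 9882 respectively (per their config_data above), so their Prometheus endpoints are reachable directly on the host. A small scrape sketch follows, assuming the conventional /metrics path and that the exporters are listening locally.

# Sketch: scrape the host-networked exporters whose ports appear in the
# config_data blobs above (node_exporter 9100, openstack_network_exporter
# 9105, podman_exporter 9882). Assumes the conventional /metrics path.
import urllib.request

EXPORTERS = {
    "node_exporter": "http://127.0.0.1:9100/metrics",
    "openstack_network_exporter": "http://127.0.0.1:9105/metrics",
    "podman_exporter": "http://127.0.0.1:9882/metrics",
}

def sample_count(url: str) -> int:
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # Count exposition lines that are samples, not "# HELP"/"# TYPE" comments.
    return sum(1 for line in body.splitlines() if line and not line.startswith("#"))

if __name__ == "__main__":
    for name, url in EXPORTERS.items():
        try:
            print(f"{name}: {sample_count(url)} samples")
        except OSError as exc:
            print(f"{name}: scrape failed ({exc})")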
Feb 1 04:43:59 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:00 localhost podman[236852]: time="2026-02-01T09:44:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:44:00 localhost podman[236852]: @ - - [01/Feb/2026:09:44:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:44:00 localhost podman[236852]: @ - - [01/Feb/2026:09:44:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1" Feb 1 04:44:01 localhost openstack_network_exporter[239388]: ERROR 09:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:44:01 localhost openstack_network_exporter[239388]: Feb 1 04:44:01 localhost openstack_network_exporter[239388]: ERROR 09:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:44:01 localhost openstack_network_exporter[239388]: Feb 1 04:44:04 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:04 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e85 e85: 6 total, 6 up, 6 in Feb 1 04:44:04 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/534665898' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:44:04 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:44:04 localhost ceph-mon[278949]: Activating manager daemon np0005604210.rirrtk Feb 1 04:44:04 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:44:04 localhost systemd[1]: session-66.scope: Deactivated successfully. Feb 1 04:44:04 localhost systemd[1]: session-66.scope: Consumed 6.039s CPU time. Feb 1 04:44:04 localhost systemd-logind[761]: Session 66 logged out. Waiting for processes to exit. Feb 1 04:44:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:44:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:44:04 localhost systemd-logind[761]: Removed session 66. Feb 1 04:44:04 localhost systemd[1]: tmp-crun.aZo2Sc.mount: Deactivated successfully. 
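Editor's note: the podman[236852] lines above are the `podman system service` REST API answering libpod requests ("GET /v4.9.3/libpod/containers/json?..." and ".../containers/stats?..."); podman_exporter reaches it through the /run/podman/podman.sock bind mount listed in its config_data. Below is a minimal sketch of the same query over the UNIX socket, assuming that socket path and API version; it lists containers much like the 200 response logged above.

# Sketch: issue the same libpod request that is logged above
# ("GET /v4.9.3/libpod/containers/json?all=true...") over the podman socket.
# Assumes /run/podman/podman.sock exists (it is bind-mounted into
# podman_exporter per its config_data) and that API v4.9.3 is served.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a UNIX-domain socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Id"][:12], c.get("Names"), c.get("State"))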
Feb 1 04:44:04 localhost podman[287960]: 2026-02-01 09:44:04.353266216 +0000 UTC m=+0.092618276 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:44:04 localhost podman[287960]: 2026-02-01 09:44:04.385530468 +0000 UTC m=+0.124882478 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:44:04 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:44:04 localhost podman[287959]: 2026-02-01 09:44:04.403061418 +0000 UTC m=+0.142780149 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
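Editor's note: the openstack_network_exporter ERROR lines at 09:44:01 above ("call(dpif-netdev/pmd-perf-show): please specify an existing datapath") mean the exporter asked ovs-vswitchd for PMD statistics, but the dpif-netdev/* appctl commands only apply to the userspace (netdev/DPDK) datapath, which this host does not run. A small sketch that checks the configured datapaths before requesting PMD data follows; `ovs-appctl dpif/show` and `dpif-netdev/pmd-rxq-show` are stock Open vSwitch commands, the check itself is illustrative.

# Sketch: avoid the "please specify an existing datapath" errors logged by
# openstack_network_exporter above by listing datapaths first.
# Assumes ovs-vswitchd is running and ovs-appctl can reach its control socket.
import subprocess

def appctl(*args) -> str:
    return subprocess.run(
        ["ovs-appctl", *args], capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    datapaths = appctl("dpif/show")
    print(datapaths)
    if "netdev" in datapaths:
        # PMD statistics only exist for the userspace (netdev) datapath.
        print(appctl("dpif-netdev/pmd-rxq-show"))
    else:
        print("no netdev datapath: skipping dpif-netdev/* queries")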
Feb 1 04:44:04 localhost podman[287959]: 2026-02-01 09:44:04.419868375 +0000 UTC m=+0.159587106 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 04:44:04 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:44:04 localhost sshd[287998]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:44:04 localhost systemd-logind[761]: New session 67 of user ceph-admin. Feb 1 04:44:04 localhost systemd[1]: Started Session 67 of User ceph-admin. 
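Editor's note: the `{"prefix": "mgr fail"}` command dispatched at 09:44:04 above forces the active ceph-mgr to drop out so a standby takes over; the "Activating manager daemon np0005604210.rirrtk" entry above and the "is now available" entry that follows are that failover completing, after which cephadm re-pushes ceph.conf and the admin keyring to every host. A small sketch for confirming which mgr is active afterwards, assuming the ceph CLI and an admin keyring; `ceph mgr stat` is stock Ceph, the field names are taken from its usual JSON output.

# Sketch: confirm the mgr failover visible above ("mgr fail" dispatched,
# then "Manager daemon np0005604210.rirrtk is now available").
# Assumes the ceph CLI and an admin keyring; fields follow the stock
# `ceph mgr stat` JSON (active_name, available, num_standby).
import json
import subprocess

def mgr_status():
    out = subprocess.run(
        ["ceph", "mgr", "stat", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    stat = mgr_status()
    print("active mgr:", stat.get("active_name"),
          "available:", stat.get("available"),
          "standbys:", stat.get("num_standby"))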
Feb 1 04:44:05 localhost ceph-mon[278949]: Manager daemon np0005604210.rirrtk is now available Feb 1 04:44:05 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604210.rirrtk/mirror_snapshot_schedule"} : dispatch Feb 1 04:44:05 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604210.rirrtk/trash_purge_schedule"} : dispatch Feb 1 04:44:05 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:05 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:05 localhost systemd[1]: tmp-crun.YLkpy2.mount: Deactivated successfully. Feb 1 04:44:05 localhost podman[288111]: 2026-02-01 09:44:05.686459399 +0000 UTC m=+0.102393412 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, version=7) Feb 1 04:44:05 localhost podman[288111]: 2026-02-01 09:44:05.813029659 +0000 UTC m=+0.228963652 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, release=1764794109, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Feb 1 04:44:06 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:06 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: [01/Feb/2026:09:44:05] ENGINE Bus STARTING Feb 1 04:44:07 localhost ceph-mon[278949]: [01/Feb/2026:09:44:05] ENGINE Serving on https://172.18.0.104:7150 Feb 1 04:44:07 localhost ceph-mon[278949]: [01/Feb/2026:09:44:05] ENGINE Client ('172.18.0.104', 39488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:44:07 localhost ceph-mon[278949]: [01/Feb/2026:09:44:05] ENGINE Serving on http://172.18.0.104:8765 Feb 1 04:44:07 localhost ceph-mon[278949]: [01/Feb/2026:09:44:05] ENGINE Bus STARTED Feb 1 04:44:07 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:44:07 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:44:07 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:07 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: 
Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:44:08 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd/host:np0005604210", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:44:08 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:44:08 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:44:08 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:44:08 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:08 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:08 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:08 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:08 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:09 localhost ceph-mon[278949]: mon.np0005604215@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:09 localhost ceph-mon[278949]: Updating 
np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:09 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:09 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:09 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:09 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:09 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c2f70f8f20 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Feb 1 04:44:09 localhost ceph-mon[278949]: mon.np0005604215@2(peon) e10 my rank is now 1 (was 2) Feb 1 04:44:09 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 1 04:44:09 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 1 04:44:09 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c30080e000 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Feb 1 04:44:09 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:44:09 localhost ceph-mon[278949]: paxos.1).electionLogic(44) init, last seen epoch 44 Feb 1 04:44:09 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:44:12 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:44:12 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
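Editor's note: the "Unable to set osd_memory_target ... below minimum 939524096" messages above are cephadm's memory autotuner at work: it derives a per-OSD target from host RAM (877,243,801 bytes, the "836.6M" it reports), but the osd_memory_target option enforces a floor of 939,524,096 bytes (896 MiB), so the override is rejected and the per-host/per-OSD values are cleaned up with the `config rm` calls. A worked check of those numbers follows; only the two byte values come from the log, the rest is unit conversion.

# Worked check of the osd_memory_target numbers logged above.
# 877243801 and 939524096 are taken verbatim from the "Unable to set ..."
# messages; everything else is plain arithmetic.
MIB = 1024 * 1024

computed_target = 877_243_801       # what the autotuner wanted to set
option_minimum = 939_524_096        # floor enforced by osd_memory_target

print(f"computed target: {computed_target / MIB:.1f} MiB")   # ~836.6 MiB
print(f"option minimum:  {option_minimum / MIB:.0f} MiB")    # 896 MiB
print(f"shortfall:       {(option_minimum - computed_target) / MIB:.1f} MiB")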
Feb 1 04:44:12 localhost podman[288991]: 2026-02-01 09:44:12.868510492 +0000 UTC m=+0.083978485 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute) Feb 1 04:44:12 localhost podman[288991]: 2026-02-01 09:44:12.90099585 +0000 UTC m=+0.116463833 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:44:12 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Remove daemons mon.np0005604210 Feb 1 04:44:13 localhost ceph-mon[278949]: Safe to remove mon.np0005604210: new quorum should be ['np0005604211', 'np0005604215', 'np0005604213', 'np0005604212'] (from ['np0005604211', 'np0005604215', 'np0005604213', 'np0005604212']) Feb 1 04:44:13 localhost ceph-mon[278949]: Removing monitor np0005604210 from monmap... Feb 1 04:44:13 localhost ceph-mon[278949]: Removing daemon mon.np0005604210 from np0005604210.localdomain -- ports [] Feb 1 04:44:13 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:44:13 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:44:13 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:13 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:44:13 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604215,np0005604213,np0005604212 in quorum (ranks 0,1,2,3) Feb 1 04:44:13 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:44:13 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:13 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 
172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:44:14 localhost ceph-mon[278949]: Deploying daemon mon.np0005604210 on np0005604210.localdomain Feb 1 04:44:14 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:14 localhost ceph-mon[278949]: Removed label mon from host np0005604210.localdomain Feb 1 04:44:15 localhost ceph-mon[278949]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:44:15 localhost ceph-mon[278949]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:44:15 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:15 localhost ceph-mon[278949]: Removed label mgr from host np0005604210.localdomain Feb 1 04:44:16 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:16 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:16 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:16 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:16 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:16 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:16 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:17 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:17 localhost ceph-mon[278949]: Removed label _admin from host np0005604210.localdomain Feb 1 04:44:18 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:18 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:18 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:44:18 
localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:18 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:44:19 localhost systemd[1]: tmp-crun.x2kI7g.mount: Deactivated successfully. Feb 1 04:44:19 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:19 localhost podman[289117]: 2026-02-01 09:44:19.072911063 +0000 UTC m=+0.068491230 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:44:19 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:19 localhost podman[289117]: 2026-02-01 09:44:19.107130886 +0000 UTC m=+0.102711003 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:44:19 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
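Each podman health check above follows the same four-step cycle: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman logs a `container health_status ... health_status=healthy` event, a matching `exec_died` event follows, and the unit reports `Deactivated successfully.` A minimal parsing sketch for this capture (assumptions: the journal has been saved to a plain-text file, here called `messages`, and only the `name=` and `health_status=` fields actually present in these lines are extracted):

```python
import re
from collections import Counter

# Matches the podman 'container health_status' events seen above, e.g.
#   ... container health_status 3bbf98... (image=..., name=ceilometer_agent_compute, health_status=healthy, ...)
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(.*?name=(?P<name>[^,]+),.*?health_status=(?P<status>[^,)]+)"
)

def health_events(path="messages"):  # 'messages' is a placeholder file name
    """Yield (container_name, health_status) tuples from a syslog-style capture."""
    with open(path, errors="replace") as fh:
        for line in fh:
            m = HEALTH_RE.search(line)
            if m:
                yield m.group("name"), m.group("status").strip()

if __name__ == "__main__":
    # Tally healthy/unhealthy results per container name.
    summary = {}
    for name, status in health_events():
        summary.setdefault(name, Counter())[status] += 1
    for name, counts in sorted(summary.items()):
        print(name, dict(counts))
```

Run over this window it would simply tally the repeated `healthy` results for ceilometer_agent_compute, podman_exporter, ovn_controller, node_exporter, openstack_network_exporter and ovn_metadata_agent.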
Feb 1 04:44:19 localhost ceph-mon[278949]: Removing np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Removing np0005604210.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:44:19 localhost ceph-mon[278949]: Removing np0005604210.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:19 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:44:19 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:19 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:20 localhost ceph-mon[278949]: Safe to remove mon.np0005604210: not in monmap (['np0005604211', 'np0005604215', 'np0005604213', 'np0005604212']) Feb 1 04:44:20 localhost ceph-mon[278949]: Removing monitor np0005604210 from monmap... 
Feb 1 04:44:20 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "mon rm", "name": "np0005604210"} : dispatch Feb 1 04:44:20 localhost ceph-mon[278949]: Removing daemon mon.np0005604210 from np0005604210.localdomain -- ports [] Feb 1 04:44:21 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:21 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:44:23 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:23 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:24 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:25 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604210.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:44:26 localhost ceph-mon[278949]: Reconfiguring crash.np0005604210 (monmap changed)... Feb 1 04:44:26 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604210 on np0005604210.localdomain Feb 1 04:44:26 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:26 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:26 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604210.rirrtk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:44:27 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604210.rirrtk (monmap changed)... 
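The `cmd={...} : dispatch` fragments in the ceph-mon entries are the monitor's audit trail of JSON monitor commands, here issued by the cephadm orchestrator through `mgr.np0005604210.rirrtk` while it drains the old mon host. A hedged sketch of submitting the same payloads from Python with the `rados` bindings; the JSON bodies are copied from the log, while the conffile and `client.admin` identity are assumptions about the local client setup:

```python
import json
import rados

# Connect with the same ceph.conf the daemons above are being given.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()

def mon_cmd(payload):
    """Send one JSON monitor command; these payloads mirror the 'dispatch' audit lines."""
    ret, outbuf, outs = cluster.mon_command(json.dumps(payload), b"")
    if ret != 0:
        raise RuntimeError(f"{payload['prefix']} failed: {outs}")
    return outbuf

# The same commands the mgr dispatched in the log:
mon_cmd({"prefix": "auth get", "entity": "client.admin"})
mon_cmd({"prefix": "mon rm", "name": "np0005604210"})   # drops the mon from the monmap
print(json.loads(mon_cmd({"prefix": "df", "format": "json"})).keys())

cluster.shutdown()
```

The later `handle_command mon_command({"prefix": "df", "format": "json"} ...)` entries from `client.openstack` are the same mechanism, triggered by nova-compute's periodic `ceph df` call.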
Feb 1 04:44:27 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604210.rirrtk on np0005604210.localdomain Feb 1 04:44:27 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:27 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:27 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:27 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:27 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:44:27 localhost podman[289407]: 2026-02-01 09:44:27.87042076 +0000 UTC m=+0.082894791 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:44:27 localhost podman[289408]: 2026-02-01 09:44:27.917415494 +0000 UTC m=+0.125991852 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:44:27 localhost podman[289408]: 2026-02-01 09:44:27.929583116 +0000 UTC m=+0.138159454 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:44:27 localhost podman[289407]: 2026-02-01 09:44:27.929828453 +0000 UTC m=+0.142302484 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, 
config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:44:27 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:44:27 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:44:28 localhost ceph-mon[278949]: Reconfiguring mon.np0005604211 (monmap changed)... Feb 1 04:44:28 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604211 on np0005604211.localdomain Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:28 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:44:29 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:29 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... Feb 1 04:44:29 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:44:29 localhost ceph-mon[278949]: Added label _no_schedule to host np0005604210.localdomain Feb 1 04:44:29 localhost ceph-mon[278949]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005604210.localdomain Feb 1 04:44:29 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... 
Feb 1 04:44:29 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:44:29 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:29 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:29 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:44:29 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:30 localhost podman[236852]: time="2026-02-01T09:44:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:44:30 localhost podman[236852]: @ - - [01/Feb/2026:09:44:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:44:30 localhost podman[236852]: @ - - [01/Feb/2026:09:44:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17787 "" "Go-http-client/1.1" Feb 1 04:44:30 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:44:30 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:44:30 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:30 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:30 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:44:31 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... 
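The `auth get-or-create` payloads above pass capabilities as a flat list of alternating daemon-type/capability pairs, e.g. `["mon", "profile crash", "mgr", "profile crash"]`. A purely illustrative helper (not part of any logged component) that turns such a flat list into the mapping being requested for each reconfigured daemon:

```python
def caps_to_dict(caps):
    """Pair up the flat caps list from an 'auth get-or-create' payload."""
    if len(caps) % 2:
        raise ValueError("caps list must hold type/capability pairs")
    return dict(zip(caps[0::2], caps[1::2]))

# Examples taken from the dispatch lines above:
print(caps_to_dict(["mon", "profile crash", "mgr", "profile crash"]))
# {'mon': 'profile crash', 'mgr': 'profile crash'}
print(caps_to_dict(["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]))
# {'mon': 'profile mgr', 'osd': 'allow *', 'mds': 'allow *'}
```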
Feb 1 04:44:31 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain"} : dispatch Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain"}]': finished Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:31 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:44:31 localhost openstack_network_exporter[239388]: ERROR 09:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:44:31 localhost openstack_network_exporter[239388]: Feb 1 04:44:31 localhost openstack_network_exporter[239388]: ERROR 09:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:44:31 localhost openstack_network_exporter[239388]: Feb 1 04:44:32 localhost ceph-mon[278949]: Removed host np0005604210.localdomain Feb 1 04:44:32 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:44:32 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:44:32 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:32 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' Feb 1 04:44:32 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:44:32 localhost ceph-mon[278949]: from='mgr.24104 172.18.0.104:0/238177948' entity='mgr.np0005604210.rirrtk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:44:32 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:44:34 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:44:34 localhost podman[289457]: 2026-02-01 09:44:34.870448324 +0000 UTC m=+0.080400122 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9/ubi-minimal, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.) 
Feb 1 04:44:34 localhost podman[289457]: 2026-02-01 09:44:34.883373299 +0000 UTC m=+0.093325087 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc.) Feb 1 04:44:34 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:44:34 localhost systemd[1]: tmp-crun.vdAg7J.mount: Deactivated successfully. 
Feb 1 04:44:34 localhost podman[289458]: 2026-02-01 09:44:34.928890077 +0000 UTC m=+0.134749186 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:44:34 localhost podman[289458]: 2026-02-01 09:44:34.963787422 +0000 UTC m=+0.169646521 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:44:34 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:44:39 localhost nova_compute[274317]: 2026-02-01 09:44:39.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:39 localhost nova_compute[274317]: 2026-02-01 09:44:39.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:44:39 localhost nova_compute[274317]: 2026-02-01 09:44:39.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:44:39 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:39 localhost nova_compute[274317]: 2026-02-01 09:44:39.120 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:44:41 localhost nova_compute[274317]: 2026-02-01 09:44:41.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:41 localhost nova_compute[274317]: 2026-02-01 09:44:41.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:41 localhost nova_compute[274317]: 2026-02-01 09:44:41.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:44:41.762 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:44:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:44:41.762 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:44:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:44:41.763 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:44:42 localhost nova_compute[274317]: 2026-02-01 09:44:42.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:42 localhost nova_compute[274317]: 2026-02-01 09:44:42.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:42 localhost nova_compute[274317]: 2026-02-01 09:44:42.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:44:43 localhost podman[289495]: 2026-02-01 09:44:43.865835991 +0000 UTC m=+0.082096195 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:44:43 localhost podman[289495]: 2026-02-01 09:44:43.882876785 +0000 UTC m=+0.099137009 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 1 04:44:43 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:44 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.131 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.132 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.132 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.132 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.133 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:44:44 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:44:44 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2305914736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.578 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.758 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.760 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12380MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.760 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.761 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.843 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.843 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:44:44 localhost nova_compute[274317]: 2026-02-01 09:44:44.868 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:44:45 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:44:45 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4166851382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.304 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.310 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.375 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updated inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Feb 1 04:44:45 
localhost nova_compute[274317]: 2026-02-01 09:44:45.375 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.376 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.406 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:44:45 localhost nova_compute[274317]: 2026-02-01 09:44:45.407 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:44:46 localhost nova_compute[274317]: 2026-02-01 09:44:46.407 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:44:49 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
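The resource tracker and provider_tree entries above carry the full inventory nova pushes to Placement for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590. As a cross-check of those figures, here is a minimal sketch in plain Python that recomputes the schedulable capacity each resource class advertises; the values are copied from the log, and the (total - reserved) * allocation_ratio rule is the usual Placement convention, assumed here rather than shown by this journal.

```python
# Sketch only: recompute the schedulable capacity implied by the inventory
# logged by nova.compute.provider_tree above. Values are copied from the log;
# the (total - reserved) * allocation_ratio rule is the usual Placement
# convention and is an assumption here, not quoted from this journal.
inventory = {
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {usable:g} schedulable units")
# Expected: MEMORY_MB 15226, VCPU 128 (16x overcommit), DISK_GB 36.9
# (the 0.9 disk ratio keeps some headroom below the raw 41 GB).
```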
Feb 1 04:44:49 localhost podman[289558]: 2026-02-01 09:44:49.860244426 +0000 UTC m=+0.076059367 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:44:49 localhost podman[289558]: 2026-02-01 09:44:49.893537359 +0000 UTC m=+0.109352310 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:44:49 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:44:54 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:44:58 localhost podman[289582]: 2026-02-01 09:44:58.863384153 +0000 UTC m=+0.075464318 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:44:58 localhost podman[289582]: 2026-02-01 09:44:58.900665743 +0000 UTC m=+0.112745908 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:44:58 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
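The node_exporter healthcheck above returns healthy, and its logged config publishes port 9100 on the host network with the systemd collector enabled. A quick way to confirm the exporter is actually serving data is to scrape it directly; the snippet below is a minimal sketch that assumes the conventional /metrics path and that 127.0.0.1:9100 is reachable, both consistent with the logged 'ports': ['9100:9100'] mapping but not demonstrated by this journal.

```python
# Minimal probe of the node_exporter published on host port 9100 (see the
# 'ports': ['9100:9100'] entry in the container config logged above).
# Assumes the conventional /metrics path; adjust host/port if needed.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:9100/metrics", timeout=5) as resp:
    body = resp.read().decode("utf-8", errors="replace")

# Print one sample line from the systemd collector enabled via
# --collector.systemd in the logged command line, if present.
for line in body.splitlines():
    if line.startswith("node_systemd_unit_state"):
        print(line)
        break
print(f"{len(body.splitlines())} metric lines total")
```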
Feb 1 04:44:58 localhost podman[289581]: 2026-02-01 09:44:58.915391894 +0000 UTC m=+0.128776200 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Feb 1 04:44:58 localhost podman[289581]: 2026-02-01 09:44:58.97580943 +0000 UTC m=+0.189193756 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:44:58 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:44:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:00 localhost podman[236852]: time="2026-02-01T09:45:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:45:00 localhost podman[236852]: @ - - [01/Feb/2026:09:45:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:45:00 localhost podman[236852]: @ - - [01/Feb/2026:09:45:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1" Feb 1 04:45:01 localhost openstack_network_exporter[239388]: ERROR 09:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:45:01 localhost openstack_network_exporter[239388]: Feb 1 04:45:01 localhost openstack_network_exporter[239388]: ERROR 09:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:45:01 localhost openstack_network_exporter[239388]: Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.409 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:45:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:45:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:45:05 localhost podman[289629]: 2026-02-01 09:45:05.866539165 +0000 UTC m=+0.077494641 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, managed_by=edpm_ansible, version=9.7, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:45:05 localhost podman[289629]: 2026-02-01 09:45:05.880622096 +0000 UTC m=+0.091577562 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, name=ubi9/ubi-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z) Feb 1 04:45:05 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:45:05 localhost systemd[1]: tmp-crun.W84p8b.mount: Deactivated successfully. 
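The long run of ceilometer_agent_compute "Skip pollster ..., no resources found this cycle" messages above matches the resource tracker report earlier in this log (used_vcpus=0): with no instances on the host there is nothing for the compute pollsters to sample. When reading a capture like this, it can help to collapse that noise; the sketch below is a hypothetical helper (its file path is illustrative) that summarizes skipped pollsters per polling timestamp from journal text in this format.

```python
# Hypothetical helper for reading a capture like this one: collapse the
# ceilometer "Skip pollster <name>, no resources found this cycle" noise
# into one summary per polling timestamp. The regex follows the journal
# format shown above; the file name is illustrative, not from this log.
import re
from collections import Counter

SKIP_RE = re.compile(
    r"(?P<ts>\w+ +\d+ [\d:]+) \S+ ceilometer_agent_compute\[\d+\]:.*?"
    r"Skip pollster (?P<name>[\w.]+), no resources found this cycle"
)

def summarize_skips(path="compute-node.log"):   # illustrative path
    cycles = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in SKIP_RE.finditer(line):
                cycles[m.group("ts")] += 1
    for ts, count in sorted(cycles.items()):
        print(f"{ts}: {count} pollsters skipped (no resources this cycle)")

if __name__ == "__main__":
    summarize_skips()
```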
Feb 1 04:45:05 localhost podman[289630]: 2026-02-01 09:45:05.926502156 +0000 UTC m=+0.134064516 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:45:05 localhost podman[289630]: 2026-02-01 09:45:05.956228417 +0000 UTC m=+0.163790797 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:45:05 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:45:07 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 e86: 6 total, 6 up, 6 in Feb 1 04:45:07 localhost ceph-mon[278949]: Activating manager daemon np0005604212.oynhpm Feb 1 04:45:07 localhost ceph-mon[278949]: Manager daemon np0005604210.rirrtk is unresponsive, replacing it with standby daemon np0005604212.oynhpm Feb 1 04:45:07 localhost sshd[289666]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:45:07 localhost systemd-logind[761]: New session 68 of user ceph-admin. Feb 1 04:45:07 localhost systemd[1]: Started Session 68 of User ceph-admin. Feb 1 04:45:08 localhost ceph-mon[278949]: Manager daemon np0005604212.oynhpm is now available Feb 1 04:45:08 localhost ceph-mon[278949]: removing stray HostCache host record np0005604210.localdomain.devices.0 Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain.devices.0"} : dispatch Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain.devices.0"}]': finished Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain.devices.0"} : dispatch Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604210.localdomain.devices.0"}]': finished Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604212.oynhpm/mirror_snapshot_schedule"} : dispatch Feb 1 04:45:08 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604212.oynhpm/trash_purge_schedule"} : dispatch Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.530914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108530957, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2394, "num_deletes": 256, "total_data_size": 8559110, "memory_usage": 9439008, "flush_reason": "Manual Compaction"} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108565914, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 5261577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15981, "largest_seqno": 18370, "table_properties": {"data_size": 5252204, "index_size": 5558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 24676, "raw_average_key_size": 22, "raw_value_size": 5231466, "raw_average_value_size": 4764, "num_data_blocks": 238, "num_entries": 1098, "num_filter_entries": 1098, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939002, "oldest_key_time": 1769939002, "file_creation_time": 1769939108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 35055 microseconds, and 10883 cpu microseconds. Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
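The rocksdb flush for JOB 9 above logs enough detail to sanity-check its own figures. The following back-of-the-envelope sketch only re-derives numbers already present in the flush_started, table_file_creation and "Flush lasted" entries; nothing is measured independently.

```python
# Numbers copied from the JOB 9 flush entries logged by ceph-mon above.
memtable_bytes = 9_439_008   # "memory_usage" in the flush_started event
entries_in     = 2_394       # memtable entries, including 256 deletes
sst_bytes      = 5_261_577   # "file_size" of Level-0 table #23
entries_out    = 1_098       # "num_entries" written to that SST
flush_us       = 35_055      # "Flush lasted 35055 microseconds"

# bytes per microsecond is numerically MB per second
print(f"flush write rate: ~{sst_bytes / flush_us:.0f} MB/s")
print(f"memtable -> SST size ratio: {sst_bytes / memtable_bytes:.2f}")
print(f"entries kept: {entries_out} of {entries_in} "
      "(older key versions and tombstoned keys collapse during the flush)")
```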
Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.565967) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 5261577 bytes OK Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.565992) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.567665) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.567689) EVENT_LOG_v1 {"time_micros": 1769939108567681, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.567711) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 8547676, prev total WAL file size 8547676, number of live WAL files 2. Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.569492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end) Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(5138KB)], [21(16MB)] Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108569573, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 22702621, "oldest_snapshot_seqno": -1} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10517 keys, 19394522 bytes, temperature: kUnknown Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108701448, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 19394522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19331318, "index_size": 35839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 280607, "raw_average_key_size": 26, "raw_value_size": 19148558, "raw_average_value_size": 1820, "num_data_blocks": 1377, "num_entries": 10517, "num_filter_entries": 10517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769939108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.701874) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 19394522 bytes Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.703841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.0 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.0, 16.6 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11066, records dropped: 549 output_compression: NoCompression Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.703872) EVENT_LOG_v1 {"time_micros": 1769939108703859, "job": 10, "event": "compaction_finished", "compaction_time_micros": 132027, "compaction_time_cpu_micros": 48906, "output_level": 6, "num_output_files": 1, "total_output_size": 19394522, "num_input_records": 11066, "num_output_records": 10517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108704825, "job": 10, "event": "table_file_deletion", "file_number": 23} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939108707719, "job": 10, "event": "table_file_deletion", "file_number": 21} Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.569384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.707761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.707766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.707768) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.707770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:08 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:08.707772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:09 localhost systemd[1]: tmp-crun.vKjP1t.mount: Deactivated successfully. Feb 1 04:45:09 localhost podman[289830]: 2026-02-01 09:45:09.321217584 +0000 UTC m=+0.094006929 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, build-date=2025-12-08T17:28:53Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph) Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:09 localhost ceph-mon[278949]: [01/Feb/2026:09:45:08] ENGINE Bus STARTING Feb 1 04:45:09 localhost podman[289830]: 2026-02-01 09:45:09.445950737 +0000 UTC m=+0.218740122 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:45:10 localhost ceph-mon[278949]: [01/Feb/2026:09:45:08] ENGINE Serving on https://172.18.0.106:7150 Feb 1 04:45:10 localhost ceph-mon[278949]: [01/Feb/2026:09:45:08] ENGINE Client ('172.18.0.106', 47908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:45:10 localhost ceph-mon[278949]: [01/Feb/2026:09:45:09] ENGINE Serving on http://172.18.0.106:8765 Feb 1 04:45:10 localhost ceph-mon[278949]: [01/Feb/2026:09:45:09] ENGINE Bus STARTED Feb 1 04:45:10 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:45:10 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:45:10 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:10 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:11 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55c30080e160 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Feb 1 04:45:11 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:45:11 localhost ceph-mon[278949]: paxos.1).electionLogic(46) init, last seen epoch 46 Feb 1 04:45:11 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e11 collect_metadata vda: no 
unique device id for vda: fallback method has no model nor serial Feb 1 04:45:11 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:45:14 localhost podman[290076]: 2026-02-01 09:45:14.874324139 +0000 UTC m=+0.080673032 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:45:14 localhost podman[290076]: 2026-02-01 09:45:14.91265332 +0000 UTC m=+0.119002173 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 1 04:45:14 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:45:16 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: paxos.1).electionLogic(49) init, last seen epoch 49, mid-election, bumping Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:16 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:45:16 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:45:16 localhost ceph-mon[278949]: Remove daemons mon.np0005604213 Feb 1 04:45:16 localhost ceph-mon[278949]: Safe to remove mon.np0005604213: new quorum should be ['np0005604211', 'np0005604215', 'np0005604212'] (from ['np0005604211', 'np0005604215', 'np0005604212']) Feb 1 04:45:16 localhost ceph-mon[278949]: Removing monitor np0005604213 from monmap... 
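Just above, the cephadm autotuner asks for an osd_memory_target of 877243801 bytes (reported as 836.6M) and the request is refused against the logged minimum of 939524096; the "config rm ... osd_memory_target" dispatches that follow simply remove the per-OSD overrides. The unit arithmetic below uses only values taken from those messages.

```python
# Values copied from the "Adjusting/Unable to set osd_memory_target" lines above.
requested = 877_243_801   # bytes the autotuner computed for this host
minimum   = 939_524_096   # minimum the mon enforces for osd_memory_target

MiB = 1 << 20
print(f"requested: {requested / MiB:.1f} MiB")   # ~836.6 MiB, matching "836.6M"
print(f"minimum:   {minimum / MiB:.1f} MiB")     # exactly 896.0 MiB
print(f"shortfall: {(minimum - requested) / MiB:.1f} MiB")
# The request is below the enforced minimum, so the setting is rejected and
# the per-OSD osd_memory_target overrides are removed (see the "config rm"
# dispatches in the surrounding log) rather than lowered.
```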
Feb 1 04:45:16 localhost ceph-mon[278949]: Removing daemon mon.np0005604213 from np0005604213.localdomain -- ports [] Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604215 in quorum (ranks 0,1) Feb 1 04:45:16 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:45:16 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604215,np0005604212 in quorum (ranks 0,1,2) Feb 1 04:45:16 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:45:16 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:16 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:17 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:45:17 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 1348M Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:45:17 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:17 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:17 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:17 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:17 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:17 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:45:18 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:18 localhost ceph-mon[278949]: Updating 
np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:18 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:18 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:19 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:19 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:20 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... 
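The "Updating ...:/etc/ceph/ceph.conf" and keyring entries above show cephadm pushing the cluster config and the client.admin keyring to both /etc/ceph and the per-fsid /var/lib/ceph/<fsid>/config directory on every host after the monmap change. An illustrative check (not part of cephadm) that those files landed on a given host, with the fsid and paths copied from the log:

    # Illustrative only: verify the files the log shows cephadm distributing.
    import os

    FSID = "33fac0b9-80c7-560f-918a-c92d3021ca1e"
    EXPECTED = [
        "/etc/ceph/ceph.conf",
        "/etc/ceph/ceph.client.admin.keyring",
        f"/var/lib/ceph/{FSID}/config/ceph.conf",
        f"/var/lib/ceph/{FSID}/config/ceph.client.admin.keyring",
    ]

    for path in EXPECTED:
        state = "present" if os.path.exists(path) else "MISSING"
        print(f"{state:7} {path}")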
Feb 1 04:45:20 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:45:20 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:20 localhost ceph-mon[278949]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:45:20 localhost ceph-mon[278949]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:45:20 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:20 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:20 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:45:20 localhost podman[290753]: 2026-02-01 09:45:20.867753462 +0000 UTC m=+0.080295208 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:45:20 localhost podman[290753]: 2026-02-01 09:45:20.903483213 +0000 UTC m=+0.116024939 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:45:20 localhost systemd[1]: 
a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:45:21 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:45:21 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:45:21 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:21 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:21 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:45:22 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:45:22 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:45:22 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:22 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:22 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:45:22 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:23 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:45:23 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:45:23 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:23 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:23 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:45:24 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:24 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:45:24 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:45:24 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:24 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:45:24 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:24 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:24 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:45:25 localhost ceph-mon[278949]: Deploying daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:45:25 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... 
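Each "Reconfiguring daemon ..." step above is preceded by the mgr dispatching an auth get-or-create mon command; the JSON in the log lists the entity followed by its caps as alternating subsystem/capability pairs. Assuming the standard ceph CLI and a reachable cluster, the same request for the mds daemon logged above could be issued by hand roughly as follows (a sketch of the equivalent CLI call, not cephadm's own code path):

    # Rebuild the logged mon_command as a ceph CLI invocation.
    # Entity and caps are copied from the dispatch entry above.
    import shlex
    import subprocess

    entity = "mds.mds.np0005604212.tkdkxt"
    caps = ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]

    cmd = ["ceph", "auth", "get-or-create", entity, *caps]
    print(shlex.join(cmd))              # inspect before running
    # subprocess.run(cmd, check=True)   # uncomment on a host with client.admin access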
Feb 1 04:45:25 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:45:25 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:25 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:25 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:26 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:45:26 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:45:26 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:26 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:26 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:45:27 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:27 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:27 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)... Feb 1 04:45:27 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:45:27 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:27 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:28 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:28 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:28 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:45:28 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:28 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:45:28 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:45:29 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:45:29 localhost podman[290776]: 2026-02-01 09:45:29.87219101 +0000 UTC m=+0.083365555 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:45:29 localhost podman[290777]: 2026-02-01 09:45:29.923990055 +0000 UTC m=+0.132130585 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:45:29 localhost podman[290776]: 2026-02-01 09:45:29.935735063 +0000 UTC m=+0.146909568 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:45:29 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:45:29 localhost podman[290777]: 2026-02-01 09:45:29.95859381 +0000 UTC m=+0.166734300 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:45:29 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:45:29 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:29 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)... 
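The systemd/podman pairs above ("Started /usr/bin/podman healthcheck run <id>", a health_status=healthy event, then "Deactivated successfully") are the transient units that drive container health checks on this node; the 'healthcheck' key inside each config_data dict carries the test command the container runs. The same check can be triggered by hand with the command shown in the log; a small wrapper sketch, assuming podman is on PATH and the container names match the logged container_name values:

    # Trigger a container health check manually, mirroring what the transient
    # systemd unit in the log does. Exit code 0 means healthy.
    import subprocess

    def healthcheck(container: str) -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0

    # container_name values taken from the log entries above
    for name in ("ovn_controller", "node_exporter"):
        print(name, "healthy" if healthcheck(name) else "unhealthy")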
Feb 1 04:45:29 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:29 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:45:29 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:45:30 localhost podman[236852]: time="2026-02-01T09:45:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:45:30 localhost podman[236852]: @ - - [01/Feb/2026:09:45:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:45:30 localhost podman[236852]: @ - - [01/Feb/2026:09:45:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17775 "" "Go-http-client/1.1" Feb 1 04:45:30 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:31 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:31 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:31 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:31 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:31 localhost openstack_network_exporter[239388]: ERROR 09:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:45:31 localhost openstack_network_exporter[239388]: Feb 1 04:45:31 localhost openstack_network_exporter[239388]: ERROR 09:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:45:31 localhost openstack_network_exporter[239388]: Feb 1 04:45:32 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:45:32 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:45:32 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:32 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... 
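The recurring openstack_network_exporter errors above ("call(dpif-netdev/pmd-perf-show): please specify an existing datapath") come from ovs-appctl dpif-netdev/* queries, which only return data when a userspace (netdev/DPDK) datapath is configured; on a kernel-datapath-only node they fail exactly like this. A quick way to see which datapaths OVS actually has, assuming ovs-appctl is installed and the switch is running:

    # List the datapaths OVS knows about; a kernel-only node typically shows
    # system@ovs-system and no netdev@ entry, which explains the exporter errors.
    import subprocess

    out = subprocess.run(["ovs-appctl", "dpif/show"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)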
Feb 1 04:45:32 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:32 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:45:32 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:45:32 localhost podman[290878]: Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.469073147 +0000 UTC m=+0.074945141 container create 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:45:32 localhost systemd[1]: Started libpod-conmon-51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7.scope. Feb 1 04:45:32 localhost systemd[1]: Started libcrun container. 
Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.438655034 +0000 UTC m=+0.044527058 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.54089468 +0000 UTC m=+0.146766674 container init 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.550040416 +0000 UTC m=+0.155912410 container start 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1764794109, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.550353726 +0000 UTC m=+0.156225730 container attach 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1764794109, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-12-08T17:28:53Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:45:32 localhost vigilant_faraday[290893]: 167 167 Feb 1 04:45:32 localhost systemd[1]: libpod-51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7.scope: Deactivated successfully. Feb 1 04:45:32 localhost podman[290878]: 2026-02-01 09:45:32.554205958 +0000 UTC m=+0.160077982 container died 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, release=1764794109, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, architecture=x86_64, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:45:32 localhost podman[290898]: 2026-02-01 09:45:32.650077134 +0000 UTC m=+0.083879462 container remove 51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, name=rhceph, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Feb 1 04:45:32 localhost systemd[1]: libpod-conmon-51ed8bd914c675be908c916b22d4ee95f4888429ac532c094cfb66bf6bf4d9f7.scope: Deactivated successfully. Feb 1 04:45:33 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:33 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:45:33 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:33 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:45:33 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:45:33 localhost podman[290967]: Feb 1 04:45:33 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.339243799 +0000 UTC m=+0.078492463 container create d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, release=1764794109, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, vcs-type=git, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:45:33 localhost systemd[1]: Started libpod-conmon-d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053.scope. Feb 1 04:45:33 localhost systemd[1]: Started libcrun container. 
Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.398436565 +0000 UTC m=+0.137685209 container init d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, release=1764794109, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.305226412 +0000 UTC m=+0.044475066 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.407493989 +0000 UTC m=+0.146742633 container start d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.408355277 +0000 UTC m=+0.147603971 container attach d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 1 04:45:33 localhost exciting_taussig[290982]: 167 167 Feb 1 04:45:33 localhost systemd[1]: libpod-d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053.scope: Deactivated successfully. Feb 1 04:45:33 localhost podman[290967]: 2026-02-01 09:45:33.412982501 +0000 UTC m=+0.152231155 container died d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, architecture=x86_64, vendor=Red Hat, Inc.) Feb 1 04:45:33 localhost systemd[1]: var-lib-containers-storage-overlay-1cd2cd6bf2747b23efca7474a35f9a1c7a2fcce6a5e2396504a7559da09269ca-merged.mount: Deactivated successfully. Feb 1 04:45:33 localhost systemd[1]: var-lib-containers-storage-overlay-e5c542e879b2b5ddc744aeaaafaa0866b2d1dc1e2de2f1a91b048c438090c101-merged.mount: Deactivated successfully. 
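The short-lived rhceph-7-rhel9 containers above (created, started, printing "167 167", then immediately removed) look like cephadm probing the ceph UID/GID baked into the image before reconfiguring daemons; 167 is the ceph user and group in Red Hat Ceph Storage images. That interpretation is an assumption based on the paired numeric output, and the sketch below is a hand-run equivalent of such a probe rather than cephadm's actual command:

    # Read the owner of /var/lib/ceph inside a throwaway container, assuming
    # podman is available; the image reference is copied from the log, and the
    # expected output matches the "167 167" lines above.
    import subprocess

    IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = out.stdout.split()
    print(f"ceph uid={uid} gid={gid}")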
Feb 1 04:45:33 localhost podman[290987]: 2026-02-01 09:45:33.513920377 +0000 UTC m=+0.091797330 container remove d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_taussig, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, release=1764794109, com.redhat.component=rhceph-container) Feb 1 04:45:33 localhost systemd[1]: libpod-conmon-d69ec2dfcb2168f33878466928913d23eacb02a9e87f4b9a34ca7c807b6f4053.scope: Deactivated successfully. Feb 1 04:45:34 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:34 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... Feb 1 04:45:34 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:34 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:45:34 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:45:34 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:34 localhost podman[291062]: Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.289393968 +0000 UTC m=+0.067968582 container create 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, ceph=True, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:45:34 localhost systemd[1]: Started libpod-conmon-00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89.scope. Feb 1 04:45:34 localhost systemd[1]: Started libcrun container. Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.351251278 +0000 UTC m=+0.129825882 container init 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, description=Red Hat Ceph Storage 7, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:45:34 localhost frosty_villani[291077]: 167 167 Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.359052453 +0000 UTC m=+0.137627067 container start 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, release=1764794109, architecture=x86_64) Feb 1 04:45:34 localhost systemd[1]: libpod-00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89.scope: Deactivated successfully. 
Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.359373903 +0000 UTC m=+0.137948567 container attach 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7) Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.362021566 +0000 UTC m=+0.140596230 container died 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:45:34 localhost podman[291062]: 2026-02-01 09:45:34.267883704 +0000 UTC m=+0.046458308 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:45:34 localhost podman[291082]: 2026-02-01 09:45:34.441203199 +0000 UTC m=+0.070576414 container remove 00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_villani, GIT_CLEAN=True, io.openshift.expose-services=, release=1764794109, version=7, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux ) Feb 1 04:45:34 localhost systemd[1]: libpod-conmon-00fcbaad54d882197224adadbb2c0185ae7671cb035c87672c991cc15b63dd89.scope: Deactivated successfully. Feb 1 04:45:34 localhost systemd[1]: var-lib-containers-storage-overlay-415a315a48d84418b498e2aca14aae75294608a60201d927b2fd819706f75aa4-merged.mount: Deactivated successfully. Feb 1 04:45:34 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:45:34 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/108569471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:45:34 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:45:34 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/108569471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:45:35 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:35 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... 
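The two audit entries above show client.openstack (likely a capacity poller, given the "volumes" pool) asking the mon for "df" and "osd pool get-quota" in JSON. The same queries can be reproduced with the ceph CLI; a sketch, assuming a keyring for client.openstack (or admin) is readable on this host:

    # Reproduce the logged client.openstack queries and parse the JSON replies.
    import json
    import subprocess

    def mon_query(*args: str) -> dict:
        out = subprocess.run(
            ["ceph", "--name", "client.openstack", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    df = mon_query("df")
    quota = mon_query("osd", "pool", "get-quota", "volumes")
    print("pools reported by df:", [p["name"] for p in df["pools"]])
    print("volumes pool quota  :", quota)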
Feb 1 04:45:35 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:35 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:45:35 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:45:35 localhost podman[291161]: Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.245215076 +0000 UTC m=+0.079433312 container create e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, release=1764794109, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git) Feb 1 04:45:35 localhost systemd[1]: Started libpod-conmon-e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490.scope. Feb 1 04:45:35 localhost systemd[1]: Started libcrun container. 
Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.309871405 +0000 UTC m=+0.144089661 container init e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7) Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.214152652 +0000 UTC m=+0.048370878 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.322526841 +0000 UTC m=+0.156745067 container start e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, release=1764794109, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=) Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.323028037 +0000 UTC m=+0.157246273 container attach e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, 
distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux ) Feb 1 04:45:35 localhost wizardly_banzai[291176]: 167 167 Feb 1 04:45:35 localhost systemd[1]: libpod-e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490.scope: Deactivated successfully. Feb 1 04:45:35 localhost podman[291161]: 2026-02-01 09:45:35.325560646 +0000 UTC m=+0.159778892 container died e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:45:35 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:35 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:35 localhost podman[291181]: 2026-02-01 09:45:35.424051075 +0000 UTC m=+0.084873433 container remove e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_banzai, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., name=rhceph, 
GIT_BRANCH=main, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, release=1764794109, RELEASE=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph) Feb 1 04:45:35 localhost systemd[1]: libpod-conmon-e3161dbb95da6ec54506b538985bfe8a3aae89ffba7f69a8320fc96fe7a64490.scope: Deactivated successfully. Feb 1 04:45:35 localhost systemd[1]: tmp-crun.Rx9K4l.mount: Deactivated successfully. Feb 1 04:45:35 localhost systemd[1]: var-lib-containers-storage-overlay-4173a90f4465ff5bee1af091f92297faaeefa0af6ab9281e766acb34dad7fd86-merged.mount: Deactivated successfully. Feb 1 04:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:45:36 localhost nova_compute[274317]: 2026-02-01 09:45:36.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:36 localhost nova_compute[274317]: 2026-02-01 09:45:36.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:45:36 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:36 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... 
Feb 1 04:45:36 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:36 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:36 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:45:36 localhost podman[291248]: 2026-02-01 09:45:36.125361052 +0000 UTC m=+0.079894788 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:45:36 localhost podman[291248]: 2026-02-01 09:45:36.139487845 +0000 UTC m=+0.094021581 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:45:36 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:45:36 localhost podman[291249]: 2026-02-01 09:45:36.191424733 +0000 UTC m=+0.145646229 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 1 04:45:36 localhost podman[291249]: 2026-02-01 09:45:36.199683082 +0000 UTC m=+0.153904628 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 04:45:36 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:45:36 localhost podman[291262]: Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.221247229 +0000 UTC m=+0.147167557 container create b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.136922964 +0000 UTC m=+0.062843372 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:45:36 localhost systemd[1]: Started libpod-conmon-b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4.scope. Feb 1 04:45:36 localhost systemd[1]: Started libcrun container. 
Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.284243704 +0000 UTC m=+0.210164032 container init b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, release=1764794109, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.295109165 +0000 UTC m=+0.221029513 container start b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc.) 
Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.295435185 +0000 UTC m=+0.221355533 container attach b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, architecture=x86_64, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, release=1764794109, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z) Feb 1 04:45:36 localhost brave_faraday[291305]: 167 167 Feb 1 04:45:36 localhost systemd[1]: libpod-b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4.scope: Deactivated successfully. Feb 1 04:45:36 localhost podman[291262]: 2026-02-01 09:45:36.298558563 +0000 UTC m=+0.224478911 container died b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1764794109, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, distribution-scope=public) Feb 1 04:45:36 localhost podman[291310]: 2026-02-01 09:45:36.389912878 +0000 UTC m=+0.078761111 container remove b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_faraday, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, 
vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., ceph=True, version=7, name=rhceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main) Feb 1 04:45:36 localhost systemd[1]: libpod-conmon-b322f0cf8fae886545ef230189cf10c954be561f331441de40d3dc40748c1fd4.scope: Deactivated successfully. Feb 1 04:45:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49ffeb647d38caaddcb8c037d8a58e108e178889d5964c3091d6bc5045890cf-merged.mount: Deactivated successfully. Feb 1 04:45:37 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:37 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.270758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137270799, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1377, "num_deletes": 250, "total_data_size": 3800561, "memory_usage": 3902408, "flush_reason": "Manual Compaction"} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137281919, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2227760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18375, "largest_seqno": 19747, "table_properties": {"data_size": 2221662, "index_size": 3179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 17003, "raw_average_key_size": 23, "raw_value_size": 2208016, "raw_average_value_size": 2987, "num_data_blocks": 137, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939109, "oldest_key_time": 1769939109, "file_creation_time": 1769939137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11203 microseconds, and 5961 cpu microseconds. Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.281965) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2227760 bytes OK Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.281987) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.285768) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.285789) EVENT_LOG_v1 {"time_micros": 1769939137285784, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.285811) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3793432, prev total WAL file size 3793756, number of live WAL files 2. Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.286833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. 
'6D6772737461740033373630' seq:0, type:0; will stop at (end) Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2175KB)], [24(18MB)] Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137286905, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 21622282, "oldest_snapshot_seqno": -1} Feb 1 04:45:37 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10727 keys, 19443280 bytes, temperature: kUnknown Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137367725, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 19443280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19382232, "index_size": 33107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26885, "raw_key_size": 286365, "raw_average_key_size": 26, "raw_value_size": 19199269, "raw_average_value_size": 1789, "num_data_blocks": 1265, "num_entries": 10727, "num_filter_entries": 10727, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769939137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.368087) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 19443280 bytes Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.369896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.2 rd, 240.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 18.5 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(18.4) write-amplify(8.7) OK, records in: 11256, records dropped: 529 output_compression: NoCompression Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.369925) EVENT_LOG_v1 {"time_micros": 1769939137369911, "job": 12, "event": "compaction_finished", "compaction_time_micros": 80922, "compaction_time_cpu_micros": 39714, "output_level": 6, "num_output_files": 1, "total_output_size": 19443280, "num_input_records": 11256, "num_output_records": 10727, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137370453, "job": 12, "event": "table_file_deletion", "file_number": 26} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939137373223, "job": 12, "event": "table_file_deletion", "file_number": 24} Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.286659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.373270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.373278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.373281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.373289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:37 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:45:37.373293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:45:39 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:39 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:39 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' 
entity='mgr.np0005604212.oynhpm' Feb 1 04:45:39 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:45:39 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:39 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:39 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:40 localhost nova_compute[274317]: 2026-02-01 09:45:40.115 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:40 localhost nova_compute[274317]: 2026-02-01 09:45:40.116 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:45:40 localhost nova_compute[274317]: 2026-02-01 09:45:40.116 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:45:40 localhost nova_compute[274317]: 2026-02-01 09:45:40.161 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:45:41 localhost nova_compute[274317]: 2026-02-01 09:45:41.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:41 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:45:41.763 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:45:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:45:41.763 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:45:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:45:41.763 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:45:42 localhost nova_compute[274317]: 2026-02-01 09:45:42.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:42 localhost nova_compute[274317]: 2026-02-01 09:45:42.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:42 localhost nova_compute[274317]: 2026-02-01 09:45:42.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:42 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 1 04:45:42 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/3879925325' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 1 04:45:43 localhost nova_compute[274317]: 2026-02-01 09:45:43.108 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:43 localhost nova_compute[274317]: 2026-02-01 09:45:43.109 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:43 localhost nova_compute[274317]: 2026-02-01 09:45:43.109 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:45:43 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:43 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:44 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.120 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.121 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.121 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.121 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.122 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:45:44 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:45:44 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3970058675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.586 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:45:44 localhost ceph-mon[278949]: Reconfig service osd.default_drive_group Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.826 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.829 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12346MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.829 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.830 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.957 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:45:44 localhost nova_compute[274317]: 2026-02-01 09:45:44.958 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.057 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.130 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.132 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.152 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.189 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.217 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:45:45 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:45 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:45 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:45:45 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1640226999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.637 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:45:45 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:45 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:45 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:45:45 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.645 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.676 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.679 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.679 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.680 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.681 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:45:45 localhost nova_compute[274317]: 2026-02-01 09:45:45.694 274321 DEBUG nova.compute.manager [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:45:45 localhost systemd[1]: tmp-crun.UPZlcn.mount: Deactivated successfully. Feb 1 04:45:45 localhost podman[291405]: 2026-02-01 09:45:45.916103393 +0000 UTC m=+0.122857928 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:45:45 localhost podman[291405]: 2026-02-01 09:45:45.951919773 +0000 UTC m=+0.158674298 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:45:45 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:45:46 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:45:46 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3274300624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:45:46 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:45:46 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:45:46 localhost nova_compute[274317]: 2026-02-01 09:45:46.690 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:46 localhost nova_compute[274317]: 2026-02-01 09:45:46.713 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:46 localhost nova_compute[274317]: 2026-02-01 09:45:46.714 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:45:47 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:47 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:47 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:45:47 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1618993870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:45:47 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:45:47 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e87 e87: 6 total, 6 up, 6 in Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr handle_mgr_map Activating! Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr handle_mgr_map I am now activating Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604211"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604211"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604212"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604212"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604215"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604215"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon).mds e16 all = 0 Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon).mds e16 all = 0 Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 
172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon).mds e16 all = 0 Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604211.cuflqz", "id": "np0005604211.cuflqz"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604211.cuflqz", "id": "np0005604211.cuflqz"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log 
[DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon).mds e16 all = 1 Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata"} : dispatch Feb 1 04:45:48 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Feb 1 04:45:48 localhost systemd[1]: session-68.scope: Deactivated successfully. Feb 1 04:45:48 localhost systemd[1]: session-68.scope: Consumed 10.141s CPU time. Feb 1 04:45:48 localhost systemd-logind[761]: Session 68 logged out. Waiting for processes to exit. Feb 1 04:45:48 localhost systemd-logind[761]: Removed session 68. 
Feb 1 04:45:48 localhost ceph-mgr[278126]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: balancer Feb 1 04:45:48 localhost ceph-mgr[278126]: [balancer INFO root] Starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:45:48 Feb 1 04:45:48 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:45:48 localhost ceph-mgr[278126]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: cephadm Feb 1 04:45:48 localhost ceph-mgr[278126]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: crash Feb 1 04:45:48 localhost ceph-mgr[278126]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: devicehealth Feb 1 04:45:48 localhost ceph-mgr[278126]: [devicehealth INFO root] Starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: iostat Feb 1 04:45:48 localhost ceph-mgr[278126]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: nfs Feb 1 04:45:48 localhost ceph-mgr[278126]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: orchestrator Feb 1 04:45:48 localhost ceph-mgr[278126]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: pg_autoscaler Feb 1 04:45:48 localhost ceph-mgr[278126]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: progress Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: [progress INFO root] Loading... Feb 1 04:45:48 localhost ceph-mgr[278126]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 1 04:45:48 localhost ceph-mgr[278126]: [progress INFO root] Loaded OSDMap, ready. 
Feb 1 04:45:48 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] recovery thread starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] starting setup Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: rbd_support Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:45:48 localhost ceph-mgr[278126]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: restful Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: status Feb 1 04:45:48 localhost ceph-mgr[278126]: [restful INFO root] server_addr: :: server_port: 8003 Feb 1 04:45:48 localhost ceph-mgr[278126]: [restful WARNING root] server not running: no certificate configured Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: telemetry Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] PerfHandler: starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:45:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:45:48 localhost ceph-mgr[278126]: mgr load Constructed class from module: volumes Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: images, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 
1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.602+0000 7fce781f3640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.602+0000 7fce781f3640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.602+0000 7fce781f3640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.602+0000 7fce781f3640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.602+0000 7fce781f3640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.606+0000 7fce7b1f9640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.606+0000 7fce7b1f9640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.606+0000 7fce7b1f9640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.606+0000 7fce7b1f9640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:45:48.606+0000 7fce7b1f9640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] TaskHandler: starting Feb 1 04:45:48 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} v 0) Feb 1 04:45:48 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:45:48 
localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 1 04:45:48 localhost ceph-mgr[278126]: [rbd_support INFO root] setup complete Feb 1 04:45:48 localhost sshd[291564]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26657 172.18.0.106:0/947533198' entity='mgr.np0005604212.oynhpm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:45:48 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/3579560302' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: Activating manager daemon np0005604215.uhhqtv Feb 1 04:45:48 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/3579560302' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:45:48 localhost ceph-mon[278949]: Manager daemon np0005604215.uhhqtv is now available Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:45:48 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:45:48 localhost systemd-logind[761]: New session 69 of user ceph-admin. Feb 1 04:45:48 localhost systemd[1]: Started Session 69 of User ceph-admin. 
Feb 1 04:45:49 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:49 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:45:49 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:45:49] ENGINE Bus STARTING Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:45:49] ENGINE Bus STARTING Feb 1 04:45:49 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:45:49] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:45:49] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:45:49 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:45:49] ENGINE Client ('172.18.0.108', 56488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:45:49] ENGINE Client ('172.18.0.108', 56488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:45:49 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:45:49] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:45:49] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:45:49 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:45:49] ENGINE Bus STARTED Feb 1 04:45:49 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:45:49] ENGINE Bus STARTED Feb 1 04:45:49 localhost podman[291701]: 2026-02-01 09:45:49.914798465 +0000 UTC m=+0.101345504 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, version=7, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7) Feb 1 04:45:50 localhost podman[291701]: 2026-02-01 
09:45:50.023827588 +0000 UTC m=+0.210374627 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux ) Feb 1 04:45:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:45:50 localhost ceph-mgr[278126]: [devicehealth INFO root] Check health Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:45:50 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:45:51 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:51 localhost ceph-mon[278949]: [01/Feb/2026:09:45:49] ENGINE Bus STARTING Feb 1 04:45:51 localhost ceph-mon[278949]: [01/Feb/2026:09:45:49] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:45:51 localhost ceph-mon[278949]: [01/Feb/2026:09:45:49] ENGINE Client ('172.18.0.108', 56488) lost — peer 
dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:45:51 localhost ceph-mon[278949]: [01/Feb/2026:09:45:49] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:45:51 localhost ceph-mon[278949]: [01/Feb/2026:09:45:49] ENGINE Bus STARTED Feb 1 04:45:51 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:45:51 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:45:51 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:51 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 1 04:45:51 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:51 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:51 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:51 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (2) No such file or directory Feb 1 04:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:45:51 localhost podman[291917]: 2026-02-01 09:45:51.81299154 +0000 UTC m=+0.086374475 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:45:51 localhost podman[291917]: 2026-02-01 09:45:51.829588029 +0000 UTC m=+0.102970944 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:45:51 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} v 0) Feb 1 
04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) 
log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:45:52 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (2) No such file or directory Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: 
from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: 
[cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:52 localhost ceph-mon[278949]: mon.np0005604215@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604211"} v 0) Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604211"} : dispatch Feb 1 04:45:52 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:45:52 localhost ceph-mon[278949]: paxos.1).electionLogic(52) init, last seen epoch 52 Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604212"} v 0) Feb 1 04:45:53 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604212"} : dispatch Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:53 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604215"} v 0) Feb 1 04:45:53 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604215"} : dispatch Feb 1 04:45:53 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:53 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604212.oynhpm 172.18.0.106:0/3809435654; not ready for session (expect reconnect) Feb 1 04:45:53 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:53 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 
04:45:53 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:53 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:53 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:53 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:45:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:45:54 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604212.oynhpm 172.18.0.106:0/3809435654; not ready for session (expect reconnect) Feb 1 04:45:54 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:54 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:54 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:54 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:54 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating 
np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:54 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:45:54 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:45:55 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:45:55 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604212.oynhpm 172.18.0.106:0/3809435654; not ready for session (expect reconnect) Feb 1 04:45:55 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:55 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:55 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:55 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Feb 1 04:45:56 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604212.oynhpm 172.18.0.106:0/3809435654; not ready for session (expect reconnect) Feb 1 04:45:56 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:56 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:56 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:56 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:57 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604212.oynhpm 172.18.0.106:0/3809435654; not ready for session (expect reconnect) Feb 1 04:45:57 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:57 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:57 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:57 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method 
has no model nor serial Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604215,np0005604212 in quorum (ranks 0,1,2) Feb 1 04:45:58 localhost ceph-mon[278949]: Health check failed: 1/4 mons down, quorum np0005604211,np0005604215,np0005604212 (MON_DOWN) Feb 1 04:45:58 localhost ceph-mon[278949]: Health detail: HEALTH_WARN 1/4 mons down, quorum np0005604211,np0005604215,np0005604212 Feb 1 04:45:58 localhost ceph-mon[278949]: [WRN] MON_DOWN: 1/4 mons down, quorum np0005604211,np0005604215,np0005604212 Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604213 (rank 3) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Feb 1 04:45:58 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:58 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:45:58 localhost 
ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} : dispatch Feb 1 04:45:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 0 B/s wr, 13 op/s Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:45:58 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 8b45aa57-831a-4f08-88ca-febe5b017770 (Updating node-proxy deployment (+4 -> 4)) Feb 1 04:45:58 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 8b45aa57-831a-4f08-88ca-febe5b017770 (Updating node-proxy deployment (+4 -> 4)) Feb 1 04:45:58 localhost ceph-mgr[278126]: [progress INFO root] Completed event 8b45aa57-831a-4f08-88ca-febe5b017770 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:45:58 localhost ceph-mon[278949]: paxos.1).electionLogic(54) init, last seen epoch 54 Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:58 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:45:58 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:58 localhost ceph-mgr[278126]: mgr finish mon failed to return metadata for mon.np0005604213: (22) Invalid argument Feb 1 04:45:58 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... 
Feb 1 04:45:58 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr services"} : dispatch Feb 1 04:45:58 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:45:58 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:45:58 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:45:58 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604213 calling monitor election Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604211 calling monitor election Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604211 is new leader, mons np0005604211,np0005604215,np0005604212,np0005604213 in quorum (ranks 0,1,2,3) Feb 1 04:45:59 localhost ceph-mon[278949]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005604211,np0005604215,np0005604212) Feb 1 04:45:59 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:45:59 localhost ceph-mon[278949]: overall HEALTH_OK Feb 1 04:45:59 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:45:59 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:59 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:45:59 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mon.np0005604213 172.18.0.107:0/1601224003; not ready for session (expect reconnect) Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:45:59 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:45:59 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:45:59 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 1 04:45:59 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:45:59 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:45:59 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:45:59 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:45:59 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:46:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.34525 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 1 04:46:00 localhost podman[236852]: time="2026-02-01T09:46:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:46:00 localhost podman[236852]: @ - - [01/Feb/2026:09:46:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:46:00 localhost podman[236852]: @ - - [01/Feb/2026:09:46:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17793 "" "Go-http-client/1.1" Feb 1 04:46:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 1 04:46:00 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... 
Feb 1 04:46:00 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:46:00 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:00 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:00 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:00 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:00 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:46:00 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:46:00 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:46:00 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:46:00 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 1 04:46:00 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:00 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:00 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:00 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:46:00 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:46:00 localhost ceph-mgr[278126]: mgr.server handle_report got status from non-daemon mon.np0005604213 Feb 1 04:46:00 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:00.549+0000 7fcea83d3640 -1 mgr.server handle_report got status from non-daemon mon.np0005604213 Feb 1 04:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:46:00 localhost podman[292636]: 2026-02-01 09:46:00.863060971 +0000 UTC m=+0.076117593 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:46:00 localhost podman[292636]: 2026-02-01 09:46:00.927644933 +0000 UTC m=+0.140701495 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:46:00 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:46:00 localhost podman[292637]: 2026-02-01 09:46:00.929405239 +0000 UTC m=+0.136202996 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:46:01 localhost podman[292637]: 2026-02-01 09:46:01.009494155 +0000 UTC m=+0.216291942 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:46:01 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:46:01 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:01 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:01 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Feb 1 04:46:01 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Feb 1 04:46:01 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 1 04:46:01 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:46:01 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:01 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:01 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:46:01 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:46:01 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:46:01 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:01 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... 
Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:01 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:46:01 localhost ceph-mon[278949]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:46:01 localhost ceph-mon[278949]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:01 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:46:01 localhost openstack_network_exporter[239388]: ERROR 09:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:46:01 localhost openstack_network_exporter[239388]: Feb 1 04:46:01 localhost openstack_network_exporter[239388]: ERROR 09:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:46:01 localhost openstack_network_exporter[239388]: Feb 1 04:46:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 1 04:46:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.34533 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Feb 1 04:46:02 localhost ceph-mgr[278126]: [cephadm INFO root] Saving service mon spec with placement label:mon Feb 1 04:46:02 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Feb 1 04:46:02 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 1 04:46:02 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:02 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:46:02 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:46:02 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:02 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Feb 1 04:46:02 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Feb 1 04:46:02 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 1 04:46:02 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:46:02 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:02 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:02 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:46:02 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:46:03 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:46:03 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:03 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:03 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:03 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:46:03 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:46:03 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:46:03 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:03 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:03 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:46:03 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... 
Feb 1 04:46:03 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 1 04:46:03 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:03 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:03 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:03 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:46:03 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:46:03 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.54103 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005604213", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:04 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:46:04 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... 
Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:46:04 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 1 04:46:04 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr services"} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:04 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:04 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:46:04 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:04 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:04 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... 
Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:04 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:46:05 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:05 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:05 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:46:05 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:46:05 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 1 04:46:05 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:05 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 1 04:46:05 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 1 04:46:05 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:05 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:05 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:46:05 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:46:06 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:06 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:06 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005604213 (monmap changed)... Feb 1 04:46:06 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005604213 (monmap changed)... 
Feb 1 04:46:06 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 1 04:46:06 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:06 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:06 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:06 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:46:06 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:46:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:06 localhost ceph-mon[278949]: Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:06 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:06 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:46:06 localhost podman[292683]: 2026-02-01 09:46:06.826587048 +0000 UTC m=+0.083092163 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, release=1769056855, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, version=9.7, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal) Feb 1 04:46:06 localhost podman[292683]: 2026-02-01 09:46:06.84168197 +0000 UTC m=+0.098187085 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7) Feb 1 04:46:06 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:46:06 localhost podman[292684]: 2026-02-01 09:46:06.929780718 +0000 UTC m=+0.183174356 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:46:06 localhost podman[292684]: 2026-02-01 09:46:06.935053893 +0000 UTC m=+0.188447551 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Feb 1 04:46:06 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:46:07 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:07 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:07 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Feb 1 04:46:07 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Feb 1 04:46:07 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Feb 1 04:46:07 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:46:07 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:07 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:07 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:46:07 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:46:07 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)... Feb 1 04:46:07 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:46:07 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:07 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:07 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:08 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... 
Feb 1 04:46:08 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Feb 1 04:46:08 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:46:08 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:08 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:08 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:46:08 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:46:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:46:08 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:46:08 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:46:08 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:08 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:08 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:08 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:08 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:09 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:46:09 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... 
Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 1 04:46:09 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:09 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:09 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:09 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:46:09 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:46:09 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:46:09 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:09 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:10 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:46:10 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... 
Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:46:10 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 1 04:46:10 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr services"} : dispatch Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:10 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:10 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:46:10 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:46:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 1 04:46:10 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:46:10 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:46:10 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:10 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:10 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:10 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:10 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:11 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:11 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:46:11 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005604213 (monmap changed)... 
Feb 1 04:46:11 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 1 04:46:11 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:11 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 1 04:46:11 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 1 04:46:11 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:46:11 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:46:11 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:46:11 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:46:11 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:46:11 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:46:11 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:11 localhost ceph-mon[278949]: from='mgr.26672 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:46:11 localhost ceph-mon[278949]: from='mgr.26672 172.18.0.108:0/2709119860' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:11 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e88 e88: 6 total, 6 up, 6 in Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:12.014+0000 7fcf04926640 -1 mgr handle_mgr_map I was active but no longer am Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr handle_mgr_map I was active but no longer am Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn e: '/usr/bin/ceph-mgr' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 0: '/usr/bin/ceph-mgr' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 1: '-n' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 2: 'mgr.np0005604215.uhhqtv' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 3: '-f' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 4: '--setuser' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 5: 'ceph' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 6: '--setgroup' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 7: 'ceph' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 8: '--default-log-to-file=false' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 9: '--default-log-to-journald=true' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn 10: '--default-log-to-stderr=false' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn respawning with exe /usr/bin/ceph-mgr Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr respawn exe_path /proc/self/exe Feb 1 04:46:12 localhost systemd[1]: session-69.scope: Deactivated successfully. 
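The ceph-mgr respawn entries above ("I was active but no longer am") log the new argv one element per line. A minimal sketch, purely for readability, that reassembles those logged elements into a single command line (per the entries above, the daemon re-execs /proc/self/exe with this argv rather than going through a shell):

    import shlex

    # argv elements exactly as printed by the "mgr respawn N: ..." entries above
    argv = [
        "/usr/bin/ceph-mgr",
        "-n", "mgr.np0005604215.uhhqtv",
        "-f",
        "--setuser", "ceph",
        "--setgroup", "ceph",
        "--default-log-to-file=false",
        "--default-log-to-journald=true",
        "--default-log-to-stderr=false",
    ]

    print(shlex.join(argv))   # one shell-quoted line, handy for comparing restarts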
Feb 1 04:46:12 localhost systemd[1]: session-69.scope: Consumed 5.763s CPU time. Feb 1 04:46:12 localhost systemd-logind[761]: Session 69 logged out. Waiting for processes to exit. Feb 1 04:46:12 localhost systemd-logind[761]: Removed session 69. Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: ignoring --setuser ceph since I am not root Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: ignoring --setgroup ceph since I am not root Feb 1 04:46:12 localhost ceph-mgr[278126]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Feb 1 04:46:12 localhost ceph-mgr[278126]: pidfile_write: ignore empty --pid-file Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Loading python module 'alerts' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Loading python module 'balancer' Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:12.205+0000 7f947e9a3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Loading python module 'cephadm' Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:12.271+0000 7f947e9a3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 1 04:46:12 localhost sshd[292745]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:46:12 localhost ceph-mon[278949]: Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:46:12 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:46:12 localhost ceph-mon[278949]: from='client.? 172.18.0.200:0/1843935985' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:46:12 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:46:12 localhost ceph-mon[278949]: Activating manager daemon np0005604211.cuflqz Feb 1 04:46:12 localhost ceph-mon[278949]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:46:12 localhost ceph-mon[278949]: Manager daemon np0005604211.cuflqz is now available Feb 1 04:46:12 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/mirror_snapshot_schedule"} : dispatch Feb 1 04:46:12 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604211.cuflqz/trash_purge_schedule"} : dispatch Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.657685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172657727, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1766, "num_deletes": 253, "total_data_size": 7165358, "memory_usage": 7353568, "flush_reason": "Manual Compaction"} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172683856, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 4271920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19752, "largest_seqno": 21513, "table_properties": {"data_size": 4264410, "index_size": 4207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 19590, "raw_average_key_size": 21, "raw_value_size": 4247752, "raw_average_value_size": 4740, "num_data_blocks": 177, "num_entries": 896, "num_filter_entries": 896, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939137, "oldest_key_time": 1769939137, "file_creation_time": 1769939172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 26298 microseconds, and 9333 cpu microseconds. Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.683979) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 4271920 bytes OK Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.684033) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.686059) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.686085) EVENT_LOG_v1 {"time_micros": 1769939172686078, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.686109) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7156247, prev total WAL file size 7156247, number of live WAL files 2. Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.688369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323630' seq:72057594037927935, type:22 .. '6B760031353132' seq:0, type:0; will stop at (end) Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(4171KB)], [27(18MB)] Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172688426, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 23715200, "oldest_snapshot_seqno": -1} Feb 1 04:46:12 localhost systemd-logind[761]: New session 70 of user ceph-admin. Feb 1 04:46:12 localhost systemd[1]: Started Session 70 of User ceph-admin. 
Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11111 keys, 22775032 bytes, temperature: kUnknown Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172811470, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 22775032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 22711025, "index_size": 35106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 297350, "raw_average_key_size": 26, "raw_value_size": 22520845, "raw_average_value_size": 2026, "num_data_blocks": 1333, "num_entries": 11111, "num_filter_entries": 11111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769939172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.811722) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 22775032 bytes Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.815981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.6 rd, 185.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 18.5 +0.0 blob) out(21.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.3) OK, records in: 11623, records dropped: 512 output_compression: NoCompression Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.815998) EVENT_LOG_v1 {"time_micros": 1769939172815991, "job": 14, "event": "compaction_finished", "compaction_time_micros": 123121, "compaction_time_cpu_micros": 27234, "output_level": 6, "num_output_files": 1, "total_output_size": 22775032, "num_input_records": 11623, "num_output_records": 11111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172816370, "job": 14, "event": "table_file_deletion", "file_number": 29} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939172817695, "job": 14, "event": "table_file_deletion", "file_number": 27} Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.687783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.817760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.817766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.817768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.817770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:12.817772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Loading python module 'crash' Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 1 04:46:12 localhost ceph-mgr[278126]: mgr[py] Loading python module 'dashboard' Feb 1 04:46:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:12.945+0000 7f947e9a3140 -1 
mgr[py] Module crash has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'devicehealth' Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'diskprediction_local' Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:13.539+0000 7f947e9a3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: from numpy import show_config as show_numpy_config Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:13.672+0000 7f947e9a3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'influx' Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'insights' Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:13.730+0000 7f947e9a3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost podman[292863]: 2026-02-01 09:46:13.757552768 +0000 UTC m=+0.100613361 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, com.redhat.component=rhceph-container, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'iostat' Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost ceph-mgr[278126]: mgr[py] Loading python module 'k8sevents' Feb 1 04:46:13 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:13.841+0000 7f947e9a3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 1 04:46:13 localhost podman[292863]: 2026-02-01 09:46:13.858756416 +0000 UTC m=+0.201817009 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, architecture=x86_64) Feb 1 04:46:14 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'localpool' Feb 1 04:46:14 localhost ceph-mon[278949]: [01/Feb/2026:09:46:13] ENGINE Bus STARTING Feb 1 04:46:14 localhost ceph-mon[278949]: [01/Feb/2026:09:46:13] ENGINE Serving on http://172.18.0.105:8765 Feb 1 04:46:14 localhost ceph-mon[278949]: [01/Feb/2026:09:46:13] ENGINE Serving on https://172.18.0.105:7150 Feb 1 04:46:14 localhost ceph-mon[278949]: [01/Feb/2026:09:46:13] ENGINE Bus STARTED Feb 1 04:46:14 localhost ceph-mon[278949]: [01/Feb/2026:09:46:13] ENGINE Client ('172.18.0.105', 33394) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'mds_autoscaler' Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
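The JOB 14 compaction summary logged above reports read-write-amplify(10.9), write-amplify(5.3) and 192.6 rd / 185.0 wr MB/sec. Below is a quick cross-check of those figures against the byte counts that appear in the same entries; the formulas are inferred from the logged values (an assumption, not taken from RocksDB source):

    # Byte counts from the JOB 14 flush/compaction events above.
    l0_in    = 4_271_920            # L0 input: table #29 file_size
    total_in = 23_715_200           # input_data_size from the compaction_started event
    l6_in    = total_in - l0_in     # L6 input: table #27 (~18.5 MB)
    out      = 22_775_032           # output: table #30 total_output_size
    micros   = 123_121              # compaction_time_micros

    write_amp      = out / l0_in                    # ~5.3  -> "write-amplify(5.3)"
    read_write_amp = (l0_in + l6_in + out) / l0_in  # ~10.9 -> "read-write-amplify(10.9)"
    rd_mb_s        = total_in / micros              # ~192.6 (bytes per microsecond == MB/s)
    wr_mb_s        = out / micros                   # ~185.0

    print(f"write-amplify ~{write_amp:.1f}, read-write-amplify ~{read_write_amp:.1f}")
    print(f"read ~{rd_mb_s:.1f} MB/s, write ~{wr_mb_s:.1f} MB/s")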
Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.292512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174292933, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 251, "total_data_size": 935403, "memory_usage": 953808, "flush_reason": "Manual Compaction"} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174299586, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 622124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21518, "largest_seqno": 21810, "table_properties": {"data_size": 620159, "index_size": 204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5516, "raw_average_key_size": 19, "raw_value_size": 616084, "raw_average_value_size": 2200, "num_data_blocks": 10, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939173, "oldest_key_time": 1769939173, "file_creation_time": 1769939174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7392 microseconds, and 3615 cpu microseconds. Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.299905) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 622124 bytes OK Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.300068) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.302223) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.302249) EVENT_LOG_v1 {"time_micros": 1769939174302243, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.302272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 933193, prev total WAL file size 949465, number of live WAL files 2. Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.306483) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(607KB)], [30(21MB)] Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174306536, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 23397156, "oldest_snapshot_seqno": -1} Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'mirroring' Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 10876 keys, 19748803 bytes, temperature: kUnknown Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174405619, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 19748803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19688809, "index_size": 31733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 292945, "raw_average_key_size": 26, "raw_value_size": 19504975, "raw_average_value_size": 1793, "num_data_blocks": 1188, "num_entries": 10876, "num_filter_entries": 10876, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": 
"bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769938864, "oldest_key_time": 0, "file_creation_time": 1769939174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9a40fa3-7e53-4325-8a76-a86e4a0fff5d", "db_session_id": "7PKSWXLLH9M8NB5FULPW", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.406054) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 19748803 bytes Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.411411) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.9 rd, 199.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 21.7 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(69.4) write-amplify(31.7) OK, records in: 11391, records dropped: 515 output_compression: NoCompression Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.411452) EVENT_LOG_v1 {"time_micros": 1769939174411434, "job": 16, "event": "compaction_finished", "compaction_time_micros": 99201, "compaction_time_cpu_micros": 38956, "output_level": 6, "num_output_files": 1, "total_output_size": 19748803, "num_input_records": 11391, "num_output_records": 10876, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174412435, "job": 16, "event": "table_file_deletion", "file_number": 32} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939174417327, "job": 16, "event": "table_file_deletion", "file_number": 30} Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.306413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.417578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.417585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original 
Log Time 2026/02/01-09:46:14.417587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.417589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mon[278949]: rocksdb: (Original Log Time 2026/02/01-09:46:14.417591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'nfs' Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'orchestrator' Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.584+0000 7f947e9a3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'osd_perf_query' Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.725+0000 7f947e9a3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'osd_support' Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.788+0000 7f947e9a3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'pg_autoscaler' Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.842+0000 7f947e9a3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'progress' Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.906+0000 7f947e9a3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:14.965+0000 7f947e9a3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 1 04:46:14 localhost ceph-mgr[278126]: mgr[py] Loading python module 'prometheus' Feb 1 04:46:15 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:46:15 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:46:15 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: 
from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rbd_support' Feb 1 04:46:15 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:15.255+0000 7f947e9a3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Loading python module 'restful' Feb 1 04:46:15 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:15.336+0000 7f947e9a3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rgw' Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 1 04:46:15 localhost ceph-mgr[278126]: mgr[py] Loading python module 'rook' Feb 1 04:46:15 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:15.656+0000 7f947e9a3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'selftest' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.068+0000 7f947e9a3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'snap_schedule' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.127+0000 7f947e9a3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'stats' Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'status' Feb 1 04:46:16 localhost podman[293125]: 2026-02-01 09:46:16.265937675 +0000 UTC m=+0.145106853 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:46:16 localhost podman[293125]: 2026-02-01 09:46:16.301063355 +0000 UTC m=+0.180232563 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:46:16 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd/host:np0005604211", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:46:16 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost 
ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:46:16 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:46:16 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:46:16 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:16 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:16 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:16 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:16 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'telegraf' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.312+0000 7f947e9a3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'telemetry' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.369+0000 7f947e9a3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'test_orchestrator' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.498+0000 7f947e9a3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'volumes' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.640+0000 7f947e9a3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Loading python module 'zabbix' Feb 1 04:46:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.824+0000 7f947e9a3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 1 04:46:16 
localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:46:16.881+0000 7f947e9a3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 1 04:46:16 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d1775411e0 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Feb 1 04:46:16 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.105:6800/155238379 Feb 1 04:46:17 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:17 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:17 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:17 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:46:18 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:46:19 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:19 localhost ceph-mon[278949]: from='mgr.26720 
172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:20 localhost ceph-mon[278949]: Reconfiguring mon.np0005604211 (monmap changed)... Feb 1 04:46:20 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604211 on np0005604211.localdomain Feb 1 04:46:20 localhost ceph-mon[278949]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:46:20 localhost ceph-mon[278949]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:46:20 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:20 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:20 localhost ceph-mon[278949]: Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:46:20 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:20 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:46:21 localhost podman[293834]: Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.568571589 +0000 UTC m=+0.056952343 container create 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1764794109, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, version=7, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:46:21 localhost systemd[1]: Started libpod-conmon-98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418.scope. Feb 1 04:46:21 localhost systemd[1]: Started libcrun container. 
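Annotation (not part of the captured log): the ceph-mon pairs earlier in this capture, "Adjusting osd_memory_target on np0005604212.localdomain to 836.6M" followed by "Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096", show the cephadm memory autotuner proposing a per-OSD target that falls under the hard floor of 939524096 bytes (896 MiB), so the setting is rejected and nothing changes. A minimal Python sketch of that comparison; only the minimum quoted in the message is assumed, and cephadm's actual derivation of the candidate figure (host RAM, autotune ratio, other daemons' estimated usage, OSD count) is not reproduced here.

# Sketch only: reproduces the bounds check implied by the log message above.
OSD_MEMORY_TARGET_MIN = 939_524_096        # 896 MiB, the minimum quoted in the error

def check_osd_memory_target(candidate_bytes: int) -> str:
    mib = candidate_bytes / 2**20          # cephadm prints the value in MiB, e.g. 836.6M
    if candidate_bytes < OSD_MEMORY_TARGET_MIN:
        return (f"reject: {candidate_bytes} ({mib:.1f}M) is below the "
                f"minimum of {OSD_MEMORY_TARGET_MIN} bytes")
    return f"accept: {candidate_bytes} ({mib:.1f}M)"

# 877246668 is the value from the np0005604212 / np0005604213 / np0005604215 lines above.
print(check_osd_memory_target(877246668))

Practical upshot: either give the OSD hosts enough memory that the autotuned share clears 896 MiB per OSD, or turn off osd_memory_target_autotune for the affected OSDs and set an explicit target.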
Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.632249343 +0000 UTC m=+0.120630087 container init 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.538399304 +0000 UTC m=+0.026780078 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.642040019 +0000 UTC m=+0.130420773 container start 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1764794109, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc.) 
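Annotation (not part of the captured log): the ceilometer_agent_compute and podman_exporter entries in this capture follow the same rhythm: a systemd transient unit runs "/usr/bin/podman healthcheck run <container-id>", podman emits a health_status event (health_status=healthy here) and an exec_died event, and the unit then logs "Deactivated successfully." A small sketch of invoking that same command and reading its result; the helper name is ours, and the exit-code interpretation (zero meaning the configured 'test' command passed) should be confirmed against the podman version in use.

import subprocess

def run_podman_healthcheck(container: str) -> bool:
    # Mirrors the transient unit seen in the log: /usr/bin/podman healthcheck run <id>.
    # A zero exit status is taken here to mean the container's health test passed.
    result = subprocess.run(
        ["/usr/bin/podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Example with the ceilometer_agent_compute container id from the log:
# run_podman_healthcheck("3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6")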
Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.642281697 +0000 UTC m=+0.130662441 container attach 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git) Feb 1 04:46:21 localhost sad_lumiere[293850]: 167 167 Feb 1 04:46:21 localhost systemd[1]: libpod-98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418.scope: Deactivated successfully. Feb 1 04:46:21 localhost podman[293834]: 2026-02-01 09:46:21.645979813 +0000 UTC m=+0.134360587 container died 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.component=rhceph-container, release=1764794109, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, version=7, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:46:21 localhost podman[293855]: 2026-02-01 09:46:21.739421778 +0000 UTC m=+0.085246569 container remove 98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_lumiere, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1764794109, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_BRANCH=main, io.openshift.expose-services=, version=7, vcs-type=git, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:46:21 localhost systemd[1]: libpod-conmon-98950d09a78752b281c28ebedddf5a4f4059657c22fcc2fdf9fba3eccd192418.scope: Deactivated successfully. Feb 1 04:46:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:22 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:22 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:22 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:46:22 localhost podman[293891]: 2026-02-01 09:46:22.029985534 +0000 UTC m=+0.085318892 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:46:22 localhost podman[293891]: 2026-02-01 09:46:22.044712655 +0000 UTC m=+0.100045993 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:46:22 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:46:22 localhost podman[293950]: Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.437811772 +0000 UTC m=+0.072324495 container create 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, ceph=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, name=rhceph, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, description=Red Hat Ceph Storage 7) Feb 1 04:46:22 localhost systemd[1]: Started libpod-conmon-9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1.scope. Feb 1 04:46:22 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.499901686 +0000 UTC m=+0.134414369 container init 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.508428863 +0000 UTC m=+0.142941556 container start 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4) Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.409520476 +0000 UTC m=+0.044033209 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.508736222 +0000 UTC m=+0.143248945 container attach 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Feb 1 04:46:22 localhost serene_archimedes[293965]: 167 167 Feb 1 04:46:22 localhost systemd[1]: libpod-9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1.scope: Deactivated successfully. Feb 1 04:46:22 localhost podman[293950]: 2026-02-01 09:46:22.511691235 +0000 UTC m=+0.146203918 container died 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:46:22 localhost systemd[1]: var-lib-containers-storage-overlay-55655cd690fecfa1d4e0404d797cbebf575f77fd9568a992fbad2b73b46c62e0-merged.mount: Deactivated successfully. Feb 1 04:46:22 localhost systemd[1]: var-lib-containers-storage-overlay-7355f4304d8145a5ffd2a4c7ab5e9c091c729df4505fb0e16731f5c869b96903-merged.mount: Deactivated successfully. 
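Annotation (not part of the captured log): the short-lived rhceph-7-rhel9 containers in this capture (sad_lumiere, serene_archimedes, and the ones that follow) are created, print "167 167", and are torn down within milliseconds. That is consistent with cephadm probing the image for the ceph user's uid and gid (167:167 in Red Hat Ceph Storage images) while it reconfigures daemons on this host. The sketch below reproduces that kind of probe; the log does not show the command cephadm actually ran inside the container, so the stat invocation here is an illustrative assumption.

import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"   # image named in the log

def probe_ceph_uid_gid(image: str = IMAGE) -> tuple[int, int]:
    # Assumed probe: stat a ceph-owned path inside the image and read "uid gid".
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return int(out[0]), int(out[1])

# Expected to return (167, 167) for the image above, matching the "167 167" container output in the log.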
Feb 1 04:46:22 localhost podman[293970]: 2026-02-01 09:46:22.606889205 +0000 UTC m=+0.086275622 container remove 9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_archimedes, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=) Feb 1 04:46:22 localhost systemd[1]: libpod-conmon-9057ef178b0eaf3fb98e297138d954d5b409604b998470d3bb090718061315e1.scope: Deactivated successfully. Feb 1 04:46:23 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:46:23 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:46:23 localhost podman[294047]: Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.433918976 +0000 UTC m=+0.075704641 container create 22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, release=1764794109) Feb 1 04:46:23 localhost systemd[1]: Started libpod-conmon-22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112.scope. Feb 1 04:46:23 localhost systemd[1]: Started libcrun container. Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.492480649 +0000 UTC m=+0.134266324 container init 22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, build-date=2025-12-08T17:28:53Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.501562793 +0000 UTC m=+0.143348468 container start 22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1764794109) Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.50177286 +0000 UTC m=+0.143558525 container attach 
22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, release=1764794109, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=) Feb 1 04:46:23 localhost stoic_maxwell[294062]: 167 167 Feb 1 04:46:23 localhost systemd[1]: libpod-22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112.scope: Deactivated successfully. Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.404389772 +0000 UTC m=+0.046175437 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:23 localhost podman[294047]: 2026-02-01 09:46:23.504432193 +0000 UTC m=+0.146217868 container died 22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main) Feb 1 04:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-c6e646e99461bd97fd31ba8fe7b9c7950b64b59f9a3df4dd987b2f788144cea4-merged.mount: Deactivated successfully. 
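Annotation (not part of the captured log): the ceph-mon[278949] audit lines that carry a cmd={...} : dispatch payload (the "config rm", "auth get", and "auth get-or-create" dispatches above) embed the monitor command as JSON, which makes them easy to mine when tracing what the cephadm mgr module asked for. A small parsing sketch over that observed format; the regex and helper name are ours.

import json
import re

AUDIT = re.compile(r"entity='(?P<entity>[^']+)' cmd=(?P<cmd>\{.*\}) : dispatch")

def parse_mon_audit(line: str):
    """Return (entity, command dict) for a dispatch line, or None for the bare audit lines."""
    m = AUDIT.search(line)
    if not m:
        return None
    return m.group("entity"), json.loads(m.group("cmd"))

sample = ("Feb 1 04:46:23 localhost ceph-mon[278949]: from='mgr.26720 "
          "172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' "
          'cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch')
print(parse_mon_audit(sample))
# ('mgr.np0005604211.cuflqz', {'prefix': 'auth get', 'entity': 'osd.5'})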
Feb 1 04:46:23 localhost podman[294067]: 2026-02-01 09:46:23.600747109 +0000 UTC m=+0.088079629 container remove 22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_maxwell, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 1 04:46:23 localhost systemd[1]: libpod-conmon-22f6591f5e1ab28a726996327dac521bc09600b560b999ced150ae556b94e112.scope: Deactivated successfully. Feb 1 04:46:24 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... Feb 1 04:46:24 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:46:24 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:24 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:24 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:24 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:24 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:24 localhost ceph-mon[278949]: mon.np0005604215@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:24 localhost podman[294142]: Feb 1 04:46:24 localhost podman[294142]: 2026-02-01 09:46:24.434142749 +0000 UTC m=+0.077301380 container create 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 
7, distribution-scope=public, version=7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, ceph=True, release=1764794109, architecture=x86_64, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Feb 1 04:46:24 localhost systemd[1]: Started libpod-conmon-1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5.scope. Feb 1 04:46:24 localhost systemd[1]: Started libcrun container. Feb 1 04:46:24 localhost podman[294142]: 2026-02-01 09:46:24.492890348 +0000 UTC m=+0.136048919 container init 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, distribution-scope=public, release=1764794109, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True) Feb 1 04:46:24 localhost podman[294142]: 2026-02-01 09:46:24.501222819 +0000 UTC m=+0.144381390 container start 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, architecture=x86_64) Feb 1 04:46:24 
localhost podman[294142]: 2026-02-01 09:46:24.50157031 +0000 UTC m=+0.144728951 container attach 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, release=1764794109, vcs-type=git, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, ceph=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:46:24 localhost frosty_einstein[294158]: 167 167 Feb 1 04:46:24 localhost systemd[1]: libpod-1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5.scope: Deactivated successfully. Feb 1 04:46:24 localhost podman[294142]: 2026-02-01 09:46:24.403798229 +0000 UTC m=+0.046956850 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:24 localhost podman[294142]: 2026-02-01 09:46:24.50413546 +0000 UTC m=+0.147294051 container died 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main) Feb 1 04:46:24 localhost systemd[1]: var-lib-containers-storage-overlay-182c22d55c8d3da1a484822226743b2b5da0237dc7a0e9f0b6834c12333edfc4-merged.mount: Deactivated successfully. 
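Annotation (not part of the captured log): in the "auth get-or-create" dispatches above, caps arrive as a flat list that alternates daemon type and capability string, e.g. ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"] for mds.np0005604215.rwvxvg and ["mon", "profile crash", "mgr", "profile crash"] for the crash client. Pairing them up makes the intent easier to read; a short sketch:

def caps_as_dict(caps: list[str]) -> dict[str, str]:
    # Flat [type, capability, type, capability, ...] list -> {type: capability}.
    return dict(zip(caps[0::2], caps[1::2]))

print(caps_as_dict(["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]))
# {'mon': 'profile mds', 'osd': 'allow rw tag cephfs *=*', 'mds': 'allow'}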
Feb 1 04:46:24 localhost podman[294163]: 2026-02-01 09:46:24.603428679 +0000 UTC m=+0.089208934 container remove 1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_einstein, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_CLEAN=True, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:46:24 localhost systemd[1]: libpod-conmon-1de7e6cf76dcd093f1b04eb9b4aa9099da292a89196c98eacf4e07b60ad8e7c5.scope: Deactivated successfully. Feb 1 04:46:25 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... Feb 1 04:46:25 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:46:25 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:25 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:25 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:25 localhost podman[294233]: Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.26052094 +0000 UTC m=+0.066570376 container create 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1764794109, 
GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:46:25 localhost systemd[1]: Started libpod-conmon-3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea.scope. Feb 1 04:46:25 localhost systemd[1]: Started libcrun container. Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.314464308 +0000 UTC m=+0.120513774 container init 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, release=1764794109, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, ceph=True) Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.322014575 +0000 UTC m=+0.128064001 container start 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, release=1764794109, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container) Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.32216612 +0000 UTC m=+0.128215586 container attach 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
io.openshift.expose-services=, release=1764794109, io.openshift.tags=rhceph ceph, name=rhceph, version=7, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Feb 1 04:46:25 localhost pensive_chandrasekhar[294248]: 167 167 Feb 1 04:46:25 localhost systemd[1]: libpod-3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea.scope: Deactivated successfully. Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.325799523 +0000 UTC m=+0.131848999 container died 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., ceph=True, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:46:25 localhost podman[294233]: 2026-02-01 09:46:25.235125615 +0000 UTC m=+0.041175121 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:25 localhost podman[294253]: 2026-02-01 09:46:25.41031064 +0000 UTC m=+0.072228673 container remove 3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_chandrasekhar, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, 
vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1764794109, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:46:25 localhost systemd[1]: libpod-conmon-3c92c8dbf22e88f763f483ea9e812282bbdd295d298cee569076156f37048eea.scope: Deactivated successfully. Feb 1 04:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-2b1b8a8c9e98b7aa8b652aab237c92cd4c11beb3559288e18608219177f3cdd5-merged.mount: Deactivated successfully. Feb 1 04:46:25 localhost nova_compute[274317]: 2026-02-01 09:46:25.972 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:46:26 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... Feb 1 04:46:26 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:46:26 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:26 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:26 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:26 localhost podman[294322]: Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.139933291 +0000 UTC m=+0.087573293 container create dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , version=7) Feb 1 04:46:26 localhost systemd[1]: Started libpod-conmon-dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f.scope. 
Feb 1 04:46:26 localhost systemd[1]: Started libcrun container. Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.196241853 +0000 UTC m=+0.143881855 container init dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=1764794109, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4) Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.20540152 +0000 UTC m=+0.153041532 container start dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, distribution-scope=public, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.41.4) Feb 1 04:46:26 localhost quizzical_albattani[294337]: 167 167 Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.108453175 +0000 UTC m=+0.056093207 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.20763589 +0000 UTC m=+0.155275902 container attach dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, 
release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True) Feb 1 04:46:26 localhost systemd[1]: libpod-dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f.scope: Deactivated successfully. Feb 1 04:46:26 localhost podman[294322]: 2026-02-01 09:46:26.209650073 +0000 UTC m=+0.157290105 container died dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1764794109, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux ) Feb 1 04:46:26 localhost podman[294342]: 2026-02-01 09:46:26.30121078 +0000 UTC m=+0.079448039 container remove dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1764794109, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:46:26 localhost systemd[1]: libpod-conmon-dcbca3df31049aa99a02d838ee0849fde295b92c7c6c69ad04e2b6252e50a22f.scope: Deactivated successfully. Feb 1 04:46:26 localhost systemd[1]: var-lib-containers-storage-overlay-394099c3ffdb20646fdcc5a07e5886b7845fcf544a535cece1bf07640258b255-merged.mount: Deactivated successfully. Feb 1 04:46:27 localhost ceph-mon[278949]: Reconfiguring mon.np0005604215 (monmap changed)... Feb 1 04:46:27 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:46:27 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:27 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:27 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:46:27 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:27 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d1775411e0 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Feb 1 04:46:27 localhost ceph-mon[278949]: mon.np0005604215@1(peon) e13 my rank is now 0 (was 1) Feb 1 04:46:27 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 1 04:46:27 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Feb 1 04:46:27 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d17e178000 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Feb 1 04:46:27 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:46:27 localhost ceph-mon[278949]: paxos.0).electionLogic(56) init, last seen epoch 56 Feb 1 04:46:27 localhost ceph-mon[278949]: mon.np0005604215@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:46:30 localhost podman[236852]: time="2026-02-01T09:46:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:46:30 localhost podman[236852]: @ - - [01/Feb/2026:09:46:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:46:30 localhost podman[236852]: @ - - [01/Feb/2026:09:46:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17790 "" "Go-http-client/1.1" Feb 1 04:46:31 localhost openstack_network_exporter[239388]: ERROR 09:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:46:31 localhost openstack_network_exporter[239388]: Feb 1 04:46:31 localhost openstack_network_exporter[239388]: ERROR 09:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:46:31 localhost openstack_network_exporter[239388]: Feb 1 04:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. 
Feb 1 04:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:46:31 localhost podman[294378]: 2026-02-01 09:46:31.889985644 +0000 UTC m=+0.094044946 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:46:31 localhost systemd[1]: tmp-crun.YNiQAV.mount: Deactivated successfully. Feb 1 04:46:31 localhost podman[294377]: 2026-02-01 09:46:31.951464508 +0000 UTC m=+0.155688365 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127) Feb 1 04:46:31 localhost podman[294378]: 2026-02-01 09:46:31.957796866 +0000 UTC m=+0.161856148 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:46:31 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:46:31 localhost podman[294377]: 2026-02-01 09:46:31.995739104 +0000 UTC m=+0.199963021 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 1 04:46:32 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 is new leader, mons np0005604215,np0005604213 in quorum (ranks 0,2) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : monmap epoch 13 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T09:46:27.712705+0000 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : created 2026-02-01T07:37:52.883666+0000 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : election_strategy: 1 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005604215 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005604212 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005604213 Feb 1 04:46:32 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005604212.tkdkxt=up:active} 2 up:standby Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : osdmap e88: 6 total, 6 up, 6 in Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : mgrmap e44: np0005604211.cuflqz(active, since 20s), standbys: np0005604209.isqrps, np0005604213.caiaeh, np0005604215.uhhqtv, np0005604212.oynhpm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005604215,np0005604213 (MON_DOWN) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : mon.np0005604212 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:46:32 localhost ceph-mon[278949]: paxos.0).electionLogic(59) init, last seen epoch 59, mid-election, bumping Feb 1 04:46:32 localhost ceph-mon[278949]: mon.np0005604215@0(electing) e13 
collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : mon.np0005604215 is new leader, mons np0005604215,np0005604212,np0005604213 in quorum (ranks 0,1,2) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : monmap epoch 13 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T09:46:27.712705+0000 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : created 2026-02-01T07:37:52.883666+0000 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : election_strategy: 1 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005604215 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005604212 Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005604213 Feb 1 04:46:32 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005604212.tkdkxt=up:active} 2 up:standby Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : osdmap e88: 6 total, 6 up, 6 in Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : mgrmap e44: np0005604211.cuflqz(active, since 20s), standbys: np0005604209.isqrps, np0005604213.caiaeh, np0005604215.uhhqtv, np0005604212.oynhpm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005604215,np0005604213) Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:32 localhost ceph-mon[278949]: log_channel(cluster) log [WRN] : stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:46:33 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:33 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:33 localhost ceph-mon[278949]: mon.np0005604212 calling monitor election Feb 1 04:46:33 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:33 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:33 
localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:33 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:46:33 localhost ceph-mon[278949]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:46:33 localhost ceph-mon[278949]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:46:33 localhost ceph-mon[278949]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:46:33 localhost ceph-mon[278949]: mon.np0005604212 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Feb 1 04:46:33 localhost ceph-mon[278949]: mon.np0005604215 calling monitor election Feb 1 04:46:33 localhost ceph-mon[278949]: mon.np0005604215 is new leader, mons np0005604215,np0005604212,np0005604213 in quorum (ranks 0,1,2) Feb 1 04:46:33 localhost ceph-mon[278949]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005604215,np0005604213) Feb 1 04:46:33 localhost ceph-mon[278949]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:46:33 localhost ceph-mon[278949]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:46:33 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:33 localhost ceph-mon[278949]: Removed label mon from host np0005604211.localdomain Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: 
mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:34 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:46:34 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:35 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:35 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:35 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:35 localhost ceph-mon[278949]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 
04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:35 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:35 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:46:35 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:46:35 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:35 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 1 04:46:35 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:35 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:35 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: Removed label mgr from host np0005604211.localdomain Feb 1 04:46:36 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... 
Feb 1 04:46:36 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:46:36 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:36 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:36 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain.devices.0}] v 0) Feb 1 04:46:36 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604211.localdomain}] v 0) Feb 1 04:46:36 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:36 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 1 04:46:36 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:37 localhost ceph-mon[278949]: Reconfiguring crash.np0005604211 (monmap changed)... 
Feb 1 04:46:37 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:46:37 localhost ceph-mon[278949]: Removed label _admin from host np0005604211.localdomain Feb 1 04:46:37 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:37 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:37 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:37 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:37 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:46:37 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:37 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:37 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:37 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:37 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:46:37 localhost systemd[1]: tmp-crun.J2UGCZ.mount: Deactivated successfully. Feb 1 04:46:37 localhost podman[294765]: 2026-02-01 09:46:37.879399359 +0000 UTC m=+0.088824722 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9/ubi-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc.) 
Feb 1 04:46:37 localhost podman[294766]: 2026-02-01 09:46:37.925388728 +0000 UTC m=+0.131807887 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:46:37 localhost podman[294765]: 2026-02-01 09:46:37.944341271 +0000 UTC m=+0.153766644 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9/ubi-minimal, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container) Feb 1 04:46:37 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:46:37 localhost podman[294766]: 2026-02-01 09:46:37.961729646 +0000 UTC m=+0.168148825 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:46:37 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:46:38 localhost ceph-mon[278949]: Reconfiguring crash.np0005604212 (monmap changed)... 
Feb 1 04:46:38 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:46:38 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:38 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:38 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:38 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:46:38 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:38 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:38 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:38 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:39 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:39 localhost ceph-mon[278949]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:46:39 localhost ceph-mon[278949]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:46:39 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:39 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:39 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:46:39 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:39 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:39 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:39 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:39 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 1 04:46:39 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:40 localhost ceph-mon[278949]: Reconfiguring osd.4 (monmap changed)... 
Feb 1 04:46:40 localhost ceph-mon[278949]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:46:40 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:40 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:40 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:46:40 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:40 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:40 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:46:40 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:40 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:40 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:40 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:40 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:46:40 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:41 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:46:41 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:41 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:46:41 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:41 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... 
Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 1 04:46:41 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain
Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:41 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 1 04:46:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:46:41.763 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 1 04:46:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:46:41.764 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 1 04:46:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:46:41.764 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 1 04:46:42 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0)
Feb 1 04:46:42 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:42 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0)
Feb 1 04:46:42 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:42 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 1 04:46:42 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.195 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.196 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.196 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.253 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.253 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:42 localhost nova_compute[274317]: 2026-02-01 09:46:42.254 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:42 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0)
Feb 1 04:46:42 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:42 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0)
Feb 1 04:46:43 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:43 localhost ceph-mon[278949]: Reconfiguring mon.np0005604212 (monmap changed)...
Feb 1 04:46:43 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:43 localhost ceph-mon[278949]: Reconfiguring crash.np0005604213 (monmap changed)...
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 1 04:46:43 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:43 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 1 04:46:43 localhost nova_compute[274317]: 2026-02-01 09:46:43.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:43 localhost nova_compute[274317]: 2026-02-01 09:46:43.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 1 04:46:43 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0)
Feb 1 04:46:44 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:44 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0)
Feb 1 04:46:44 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:44 localhost nova_compute[274317]: 2026-02-01 09:46:44.097 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:44 localhost nova_compute[274317]: 2026-02-01 09:46:44.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:44 localhost ceph-mon[278949]: Reconfiguring osd.0 (monmap changed)...
Feb 1 04:46:44 localhost ceph-mon[278949]: Reconfiguring daemon osd.0 on np0005604213.localdomain
Feb 1 04:46:44 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:44 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:44 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 1 04:46:44 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 1 04:46:44 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0)
Feb 1 04:46:44 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:44 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0)
Feb 1 04:46:45 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:45 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 1 04:46:45 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 1 04:46:45 localhost ceph-mon[278949]: Reconfiguring osd.3 (monmap changed)...
Feb 1 04:46:45 localhost ceph-mon[278949]: Reconfiguring daemon osd.3 on np0005604213.localdomain
Feb 1 04:46:45 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:45 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:45 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)...
Feb 1 04:46:45 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 1 04:46:45 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 1 04:46:45 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain
Feb 1 04:46:45 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0)
Feb 1 04:46:45 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:45 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0)
Feb 1 04:46:45 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:45 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 1 04:46:45 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.126 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.126 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.126 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.127 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.127 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 1 04:46:46 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:46 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:46 localhost ceph-mon[278949]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)...
Feb 1 04:46:46 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 1 04:46:46 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 1 04:46:46 localhost ceph-mon[278949]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain
Feb 1 04:46:46 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 1 04:46:46 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/329388596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.600 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 1 04:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.
Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.818 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.819 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12351MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.820 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.820 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:46:46 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:46:46 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:46 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:46:46 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:46 localhost podman[294826]: 2026-02-01 09:46:46.873834008 +0000 UTC m=+0.083655490 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.880 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.881 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:46:46 localhost podman[294826]: 2026-02-01 09:46:46.888719725 +0000 UTC m=+0.098541217 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 1 04:46:46 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:46:46 localhost nova_compute[274317]: 2026-02-01 09:46:46.900 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:46:47 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:47 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:47 localhost ceph-mon[278949]: Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:46:47 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:46:47 localhost ceph-mon[278949]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/728333592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 1 04:46:47 localhost nova_compute[274317]: 2026-02-01 09:46:47.340 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 1 04:46:47 localhost nova_compute[274317]: 2026-02-01 09:46:47.345 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 1 04:46:47 localhost nova_compute[274317]: 2026-02-01 09:46:47.382 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 1 04:46:47 localhost nova_compute[274317]: 2026-02-01 09:46:47.384 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 1 04:46:47 localhost nova_compute[274317]: 2026-02-01 09:46:47.384 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0)
Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0)
Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:47 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 1 04:46:47 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz'
Feb 1 04:46:48 localhost
ceph-mon[278949]: Added label _no_schedule to host np0005604211.localdomain Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:48 localhost ceph-mon[278949]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005604211.localdomain Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:48 localhost ceph-mon[278949]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:48 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:46:48 localhost ceph-mon[278949]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:46:48 localhost podman[294921]: Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.319831817 +0000 UTC m=+0.076414214 container create a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, architecture=x86_64, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:46:48 localhost systemd[1]: Started libpod-conmon-a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d.scope. Feb 1 04:46:48 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:48 localhost nova_compute[274317]: 2026-02-01 09:46:48.385 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.38671 +0000 UTC m=+0.143292447 container init a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7) Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.290339674 +0000 UTC m=+0.046922121 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.402198205 +0000 UTC m=+0.158780602 container start a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, version=7, ceph=True, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.403345721 +0000 UTC m=+0.159928158 container attach a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, maintainer=Guillaume Abrioux , 
GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:46:48 localhost suspicious_brown[294936]: 167 167 Feb 1 04:46:48 localhost systemd[1]: libpod-a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d.scope: Deactivated successfully. Feb 1 04:46:48 localhost podman[294921]: 2026-02-01 09:46:48.406654504 +0000 UTC m=+0.163236961 container died a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=1764794109, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, io.buildah.version=1.41.4, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:46:48 localhost podman[294941]: 2026-02-01 09:46:48.499884204 +0000 UTC m=+0.080139570 container remove a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_brown, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, 
architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:46:48 localhost systemd[1]: libpod-conmon-a54934a68e008827a9a722fc7212de528d14b5e1624e1edfbe0ad44b56871f2d.scope: Deactivated successfully. Feb 1 04:46:48 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:46:48 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:48 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:46:48 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:49 localhost podman[295011]: Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.200770646 +0000 UTC m=+0.073212643 container create 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., vcs-type=git, release=1764794109, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:46:49 localhost systemd[1]: Started libpod-conmon-0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d.scope. Feb 1 04:46:49 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.262412585 +0000 UTC m=+0.134854582 container init 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, RELEASE=main, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.171614353 +0000 UTC m=+0.044056350 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.273062759 +0000 UTC m=+0.145504746 container start 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, ceph=True, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux ) Feb 1 04:46:49 localhost busy_cerf[295026]: 167 167 Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.273381289 +0000 UTC m=+0.145823286 container attach 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, 
com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:46:49 localhost systemd[1]: libpod-0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d.scope: Deactivated successfully. Feb 1 04:46:49 localhost podman[295011]: 2026-02-01 09:46:49.278688815 +0000 UTC m=+0.151130832 container died 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, release=1764794109, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, build-date=2025-12-08T17:28:53Z) Feb 1 04:46:49 localhost systemd[1]: tmp-crun.boHbY4.mount: Deactivated successfully. Feb 1 04:46:49 localhost systemd[1]: var-lib-containers-storage-overlay-e11f52e1d3bcd452f9d900af7111a68cb0f04ee9b60ee4323eb0c70b3eec4005-merged.mount: Deactivated successfully. Feb 1 04:46:49 localhost systemd[1]: var-lib-containers-storage-overlay-f41329cc93b1e237145818483abb0af9dd83da1b15a02825f45bfa12b653db5c-merged.mount: Deactivated successfully. 
Feb 1 04:46:49 localhost podman[295031]: 2026-02-01 09:46:49.367418342 +0000 UTC m=+0.082745471 container remove 0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_cerf, RELEASE=main, name=rhceph, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , release=1764794109, architecture=x86_64) Feb 1 04:46:49 localhost systemd[1]: libpod-conmon-0efa30b9489ec26262c948305f9cd42c16bcf7818e570a8de554aaf38f2b1d3d.scope: Deactivated successfully. Feb 1 04:46:49 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:46:49 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:46:49 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: Reconfiguring osd.2 (monmap changed)... 
Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:46:49 localhost ceph-mon[278949]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:46:49 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 1 04:46:49 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:49 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} v 0) Feb 1 04:46:49 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} : dispatch Feb 1 04:46:49 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"}]': finished Feb 1 04:46:50 localhost podman[295106]: Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.157515577 +0000 UTC m=+0.103746219 container create 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, release=1764794109, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, distribution-scope=public) Feb 1 04:46:50 localhost systemd[1]: Started libpod-conmon-5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779.scope. Feb 1 04:46:50 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.221121819 +0000 UTC m=+0.167352461 container init 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, release=1764794109, ceph=True, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.129659886 +0000 UTC m=+0.075890578 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.229853732 +0000 UTC m=+0.176084374 container start 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1764794109, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main) Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.23012396 +0000 UTC m=+0.176354602 container attach 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, ceph=True, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1764794109, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, RELEASE=main, architecture=x86_64) Feb 1 04:46:50 localhost funny_gagarin[295122]: 167 167 Feb 1 04:46:50 localhost systemd[1]: libpod-5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779.scope: Deactivated successfully. Feb 1 04:46:50 localhost podman[295106]: 2026-02-01 09:46:50.234097395 +0000 UTC m=+0.180328037 container died 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, build-date=2025-12-08T17:28:53Z, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1764794109, name=rhceph, io.buildah.version=1.41.4) Feb 1 04:46:50 localhost systemd[1]: var-lib-containers-storage-overlay-5095ae3d1f1875cf72808787b0464c8d94a8ff08d48db63c17468128e829c2ee-merged.mount: Deactivated successfully. 
Feb 1 04:46:50 localhost podman[295127]: 2026-02-01 09:46:50.329619856 +0000 UTC m=+0.086571472 container remove 5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_gagarin, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, release=1764794109, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:46:50 localhost systemd[1]: libpod-conmon-5097327bb2f8f0b79b6e22a981dc9508f1ae6bb9d2051b3685585b0c11763779.scope: Deactivated successfully. Feb 1 04:46:50 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:46:50 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:50 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:46:50 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:50 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 1 04:46:50 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:50 localhost ceph-mon[278949]: Reconfiguring osd.5 (monmap changed)... 
Feb 1 04:46:50 localhost ceph-mon[278949]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} : dispatch Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} : dispatch Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"}]': finished Feb 1 04:46:50 localhost ceph-mon[278949]: Removed host np0005604211.localdomain Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:50 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:46:51 localhost podman[295204]: Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.138198309 +0000 UTC m=+0.073076999 container create 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, io.openshift.tags=rhceph ceph, release=1764794109, vendor=Red Hat, Inc., name=rhceph, build-date=2025-12-08T17:28:53Z, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=) Feb 1 04:46:51 localhost systemd[1]: Started libpod-conmon-4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d.scope. Feb 1 04:46:51 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.1085298 +0000 UTC m=+0.043408510 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.218113811 +0000 UTC m=+0.152992511 container init 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, distribution-scope=public, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph) Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.227677021 +0000 UTC m=+0.162555711 container start 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, version=7, release=1764794109, GIT_CLEAN=True) Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.228071683 +0000 UTC m=+0.162950403 container attach 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, release=1764794109, distribution-scope=public, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.created=2025-12-08T17:28:53Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Feb 1 04:46:51 localhost blissful_austin[295219]: 167 167 Feb 1 04:46:51 localhost systemd[1]: libpod-4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d.scope: Deactivated successfully. Feb 1 04:46:51 localhost podman[295204]: 2026-02-01 09:46:51.230823249 +0000 UTC m=+0.165701949 container died 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109) Feb 1 04:46:51 localhost systemd[1]: var-lib-containers-storage-overlay-30ab57fa4e60f5c9ca36354f9d3d7e15d27c733f3ff3b7ceb7d3c889f265c436-merged.mount: Deactivated successfully. 
Feb 1 04:46:51 localhost podman[295224]: 2026-02-01 09:46:51.327170265 +0000 UTC m=+0.083443424 container remove 4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_austin, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Feb 1 04:46:51 localhost systemd[1]: libpod-conmon-4bbde1a943942ea234759b6d4a8cf23823f1a309cbec76566bfc46f066fb450d.scope: Deactivated successfully. Feb 1 04:46:51 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:46:51 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:51 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:46:51 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:51 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 1 04:46:51 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:51 localhost ceph-mon[278949]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... 
Feb 1 04:46:51 localhost ceph-mon[278949]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:46:51 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:51 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:46:51 localhost ceph-mon[278949]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:51 localhost ceph-mon[278949]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:46:52 localhost podman[295294]: Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.069496344 +0000 UTC m=+0.072613314 container create b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1764794109, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:46:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:46:52 localhost systemd[1]: Started libpod-conmon-b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224.scope. Feb 1 04:46:52 localhost systemd[1]: Started libcrun container. 
Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.128905514 +0000 UTC m=+0.132022474 container init b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, release=1764794109, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:46:52 localhost systemd[1]: tmp-crun.7CYf8s.mount: Deactivated successfully. Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.041832978 +0000 UTC m=+0.044949978 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.145840414 +0000 UTC m=+0.148957374 container start b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1764794109, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:46:52 localhost lucid_hofstadter[295310]: 167 167 Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.146125303 +0000 UTC m=+0.149242313 container attach b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume 
Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1764794109, io.openshift.expose-services=, GIT_CLEAN=True) Feb 1 04:46:52 localhost systemd[1]: libpod-b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224.scope: Deactivated successfully. Feb 1 04:46:52 localhost podman[295294]: 2026-02-01 09:46:52.151195822 +0000 UTC m=+0.154312812 container died b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, release=1764794109, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Feb 1 04:46:52 localhost sshd[295338]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:46:52 localhost podman[295309]: 2026-02-01 09:46:52.203797429 +0000 UTC m=+0.089339568 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:46:52 localhost podman[295323]: 2026-02-01 09:46:52.276911037 +0000 UTC m=+0.124207970 container remove b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hofstadter, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:46:52 localhost systemd[1]: libpod-conmon-b07e8d9c31d2c18b4e8772dc855f8241f248ecf7cc75e90f5f1f179d84ca4224.scope: Deactivated successfully. Feb 1 04:46:52 localhost podman[295309]: 2026-02-01 09:46:52.291879446 +0000 UTC m=+0.177421575 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:46:52 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:46:52 localhost systemd-logind[761]: New session 71 of user tripleo-admin. Feb 1 04:46:52 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 1 04:46:52 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 1 04:46:52 localhost systemd[1]: var-lib-containers-storage-overlay-8eccd5196d661e5de1cbd620e5a8927c8d0bbc2e9c327afd3a9f0f5f8ca48b22-merged.mount: Deactivated successfully. Feb 1 04:46:52 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 1 04:46:52 localhost systemd[1]: Starting User Manager for UID 1003... Feb 1 04:46:52 localhost systemd[295356]: Queued start job for default target Main User Target. 
Feb 1 04:46:52 localhost systemd[295356]: Created slice User Application Slice. Feb 1 04:46:52 localhost systemd[295356]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 04:46:52 localhost systemd[295356]: Started Daily Cleanup of User's Temporary Directories. Feb 1 04:46:52 localhost systemd[295356]: Reached target Paths. Feb 1 04:46:52 localhost systemd[295356]: Reached target Timers. Feb 1 04:46:52 localhost systemd[295356]: Starting D-Bus User Message Bus Socket... Feb 1 04:46:52 localhost systemd[295356]: Starting Create User's Volatile Files and Directories... Feb 1 04:46:52 localhost systemd[295356]: Listening on D-Bus User Message Bus Socket. Feb 1 04:46:52 localhost systemd[295356]: Reached target Sockets. Feb 1 04:46:52 localhost systemd[295356]: Finished Create User's Volatile Files and Directories. Feb 1 04:46:52 localhost systemd[295356]: Reached target Basic System. Feb 1 04:46:52 localhost systemd[295356]: Reached target Main User Target. Feb 1 04:46:52 localhost systemd[295356]: Startup finished in 144ms. Feb 1 04:46:52 localhost systemd[1]: Started User Manager for UID 1003. Feb 1 04:46:52 localhost systemd[1]: Started Session 71 of User tripleo-admin. Feb 1 04:46:53 localhost python3[295498]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 1 04:46:53 localhost python3[295644]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:46:54 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:46:54 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.105:6800/155238379 Feb 1 04:46:54 localhost python3[295789]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 04:46:59 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:00 localhost podman[236852]: time="2026-02-01T09:47:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:47:00 localhost podman[236852]: @ - - [01/Feb/2026:09:47:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:47:00 localhost podman[236852]: @ - - [01/Feb/2026:09:47:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17794 "" "Go-http-client/1.1" Feb 1 04:47:01 localhost openstack_network_exporter[239388]: ERROR 09:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:47:01 localhost openstack_network_exporter[239388]: Feb 1 04:47:01 localhost 
openstack_network_exporter[239388]: ERROR 09:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:47:01 localhost openstack_network_exporter[239388]: Feb 1 04:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:47:02 localhost podman[295808]: 2026-02-01 09:47:02.871245064 +0000 UTC m=+0.086336634 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:47:02 localhost podman[295809]: 2026-02-01 09:47:02.947623805 +0000 UTC m=+0.160984731 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:47:02 localhost podman[295809]: 2026-02-01 09:47:02.959980202 +0000 UTC m=+0.173341178 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:47:02 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:47:02 localhost podman[295808]: 2026-02-01 09:47:02.977677996 +0000 UTC m=+0.192769556 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 1 04:47:02 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.405 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:47:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:47:04 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 
04:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:47:08 localhost podman[295856]: 2026-02-01 09:47:08.8644615 +0000 UTC m=+0.077855479 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, config_id=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 1 04:47:08 localhost podman[295856]: 2026-02-01 09:47:08.878740277 +0000 UTC m=+0.092134246 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, architecture=x86_64, release=1769056855, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:47:08 localhost systemd[1]: tmp-crun.8Y3E6J.mount: Deactivated successfully. 
Feb 1 04:47:08 localhost podman[295857]: 2026-02-01 09:47:08.920128873 +0000 UTC m=+0.129604568 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 04:47:08 localhost podman[295857]: 2026-02-01 09:47:08.929780516 +0000 UTC m=+0.139256201 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:47:08 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:47:08 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:47:09 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:14 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:47:17 localhost podman[295894]: 2026-02-01 09:47:17.857910938 +0000 UTC m=+0.077051773 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:47:17 localhost podman[295894]: 2026-02-01 09:47:17.868366955 +0000 UTC m=+0.087507810 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute) Feb 1 04:47:17 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:47:19 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:47:22 localhost podman[295913]: 2026-02-01 09:47:22.861978916 +0000 UTC m=+0.076754974 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:47:22 localhost podman[295913]: 2026-02-01 09:47:22.87263987 +0000 UTC m=+0.087415958 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:47:22 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
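The repeating pattern above, where systemd starts a "/usr/bin/podman healthcheck run <container-id>" unit, podman records a container health_status event followed by exec_died, and the unit then deactivates, is one EDPM healthcheck cycle as it appears in this journal. The sketch below shows one way to pull the event type, container name and health status back out of such lines; the regular expression and the naive label split are assumptions based only on the line format visible here, not anything provided by podman.

    import re

    # Matches podman journal lines like the ones above, e.g.
    #   "... container health_status <64-hex-id> (image=..., name=podman_exporter, health_status=healthy, ...)"
    # The pattern is an assumption derived purely from the log format shown in this file.
    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died|died|remove) "
        r"(?P<cid>[0-9a-f]{64}) \((?P<labels>.*)\)$"
    )

    def parse_podman_event(line):
        m = EVENT_RE.search(line)
        if not m:
            return None
        # The parenthesised part is a comma-separated key=value list; values may
        # themselves contain commas (e.g. config_data), so this naive split is only a sketch.
        labels = {}
        for field in m.group("labels").split(", "):
            if "=" in field:
                key, _, value = field.partition("=")
                labels[key] = value
        return {"event": m.group("event"), "cid": m.group("cid"),
                "name": labels.get("name"), "health": labels.get("health_status")}

    sample = ("container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8be"
              "d82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:"
              "d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, "
              "name=podman_exporter, health_status=healthy)")
    print(parse_podman_event(sample))
    # -> event='health_status', name='podman_exporter', health='healthy' (cid printed in full)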
Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e88 do_prune osdmap full prune enabled Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Activating manager daemon np0005604209.isqrps Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Manager daemon np0005604211.cuflqz is unresponsive, replacing it with standby daemon np0005604209.isqrps Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e89 e89: 6 total, 6 up, 6 in Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : mgrmap e45: np0005604209.isqrps(active, starting, since 0.0467249s), standbys: np0005604213.caiaeh, np0005604215.uhhqtv, np0005604212.oynhpm Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604212"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604212"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604215"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata", "id": "np0005604215"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).mds e16 all = 0 Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).mds e16 all = 0 Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: 
mon.np0005604215@0(leader).mds e16 all = 0 Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 1 04:47:24 localhost 
ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mds metadata"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader).mds e16 all = 1 Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd metadata"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon metadata"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: Activating manager daemon np0005604209.isqrps Feb 1 04:47:24 localhost ceph-mon[278949]: Manager daemon np0005604211.cuflqz is unresponsive, replacing it with standby daemon np0005604209.isqrps Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Manager daemon np0005604209.isqrps is now available Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:47:24 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} v 0) Feb 1 04:47:24 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:47:24 localhost sshd[295936]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:47:24 localhost systemd-logind[761]: New session 73 of user ceph-admin. Feb 1 04:47:24 localhost systemd[1]: Started Session 73 of User ceph-admin. Feb 1 04:47:25 localhost ceph-mon[278949]: Manager daemon np0005604209.isqrps is now available Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:47:25 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:47:25 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : mgrmap e46: np0005604209.isqrps(active, since 1.16558s), standbys: np0005604213.caiaeh, np0005604215.uhhqtv, np0005604212.oynhpm Feb 1 04:47:25 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 1 04:47:25 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:25 localhost podman[296047]: 2026-02-01 09:47:25.748476981 +0000 UTC m=+0.087049966 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, build-date=2025-12-08T17:28:53Z, ceph=True, release=1764794109, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:47:25 localhost podman[296047]: 2026-02-01 09:47:25.882905099 +0000 UTC m=+0.221478104 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True) Feb 1 04:47:26 localhost ceph-mon[278949]: removing stray HostCache host record np0005604211.localdomain.devices.0 Feb 1 04:47:26 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(cluster) log [INF] : Cluster is now healthy Feb 1 04:47:26 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: 
mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:26 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:47:26 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: Saving service mon spec with placement label:mon Feb 1 04:47:27 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:47:27 localhost ceph-mon[278949]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:47:27 localhost ceph-mon[278949]: Cluster is now healthy Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: [01/Feb/2026:09:47:27] ENGINE Bus STARTING Feb 1 04:47:27 localhost ceph-mon[278949]: [01/Feb/2026:09:47:27] ENGINE Serving on http://172.18.0.200:8765 Feb 1 04:47:27 localhost ceph-mon[278949]: [01/Feb/2026:09:47:27] ENGINE Serving on https://172.18.0.200:7150 Feb 1 04:47:27 localhost ceph-mon[278949]: [01/Feb/2026:09:47:27] ENGINE Bus STARTED Feb 1 04:47:27 localhost ceph-mon[278949]: [01/Feb/2026:09:47:27] ENGINE Client ('172.18.0.200', 32790) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:47:27 localhost ceph-mon[278949]: log_channel(cluster) log [DBG] : mgrmap e47: np0005604209.isqrps(active, since 3s), standbys: np0005604213.caiaeh, np0005604215.uhhqtv, np0005604212.oynhpm Feb 1 04:47:27 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:47:27 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 
172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:47:27 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:27 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 1 04:47:27 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:47:27 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 1 04:47:27 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:47:27 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost 
ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:47:28 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:47:28 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' 
entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:47:28 localhost ceph-mon[278949]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:47:28 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:47:28 localhost ceph-mon[278949]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:47:28 localhost ceph-mon[278949]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:28 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:28 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:28 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:29 localhost ceph-mon[278949]: mon.np0005604215@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:47:30 localhost podman[236852]: time="2026-02-01T09:47:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:47:30 localhost podman[236852]: @ - - [01/Feb/2026:09:47:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:47:30 localhost podman[236852]: @ - - [01/Feb/2026:09:47:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17792 "" "Go-http-client/1.1" Feb 1 04:47:30 localhost ceph-mon[278949]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:30 localhost ceph-mon[278949]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:30 localhost ceph-mon[278949]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:30 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:47:30 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:30 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:47:30 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:30 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:47:30 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:47:30 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:30 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:47:30 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "quorum_status"} v 0) Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [DBG] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "quorum_status"} : dispatch Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e13 handle_command mon_command({"prefix": "mon rm", "name": "np0005604215"} v 0) Feb 1 04:47:31 localhost ceph-mon[278949]: log_channel(audit) log [INF] : from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "mon rm", "name": "np0005604215"} : dispatch Feb 1 04:47:31 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d1775411e0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Feb 1 04:47:31 localhost ceph-mon[278949]: mon.np0005604215@0(leader) e14 removed from monmap, suicide. 
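The repeated "Unable to set osd_memory_target ... below minimum 939524096" messages at 04:47:28 above appear to come from the mgr's osd memory autotuning: the per-host value it computed (877243801 or 877246668 bytes, roughly 836.6 MiB, matching the "Adjusting osd_memory_target ... to 836.6M" lines) is smaller than the 939524096-byte (896 MiB) minimum the option accepts, so the set is rejected while the per-OSD overrides are still removed. A quick check of that arithmetic, using only numbers that appear in the log:

    # Values taken verbatim from the ceph-mon records above.
    MIB = 1024 * 1024

    computed_targets = {
        "np0005604215.localdomain": 877_243_801,
        "np0005604212.localdomain": 877_246_668,
        "np0005604213.localdomain": 877_246_668,
    }
    OSD_MEMORY_TARGET_MIN = 939_524_096  # "below minimum 939524096" in the log; 896 MiB exactly

    for host, target in computed_targets.items():
        verdict = "rejected" if target < OSD_MEMORY_TARGET_MIN else "accepted"
        print(f"{host}: {target / MIB:.1f} MiB "
              f"(minimum {OSD_MEMORY_TARGET_MIN / MIB:.0f} MiB) -> {verdict}")
    # 877243801 / 2**20 is about 836.6 MiB, consistent with the "Adjusting ... to 836.6M"
    # lines; all computed values sit below the 896 MiB minimum, hence the errors.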
Feb 1 04:47:31 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 1 04:47:31 localhost ceph-mgr[278126]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 1 04:47:31 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d177541080 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Feb 1 04:47:31 localhost podman[296941]: 2026-02-01 09:47:31.232272628 +0000 UTC m=+0.052445123 container died e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, distribution-scope=public, build-date=2025-12-08T17:28:53Z, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=)
Feb 1 04:47:31 localhost systemd[1]: var-lib-containers-storage-overlay-e83968a87c9b2ae83e102a25fe5279ef93ea6c64bdb6aa577d60da58f4409de1-merged.mount: Deactivated successfully.
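The monitor shutdown in the surrounding records is driven entirely by mon_command dispatches from the newly activated mgr (mgr.np0005604209.isqrps): "quorum_status" is queried, {"prefix": "mon rm", "name": "np0005604215"} is dispatched, the local monitor notes "e14 removed from monmap, suicide", and its container exits and is removed. For reference, the same mon_command JSON can be issued from the python-rados bindings; the sketch below uses only the read-only "quorum_status" command and assumes a reachable cluster with the usual /etc/ceph/ceph.conf and admin keyring, so treat it as an illustration rather than the orchestrator's actual code path.

    import json
    import rados  # python3-rados bindings

    # Illustrative only: send the same kind of mon_command JSON that appears in the
    # audit records above, here as a harmless read-only query.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        if ret == 0:
            quorum = json.loads(outbuf)
            print("quorum leader:", quorum.get("quorum_leader_name"))
            print("monmap epoch :", quorum["monmap"]["epoch"])
        else:
            print("mon_command failed:", ret, errs)
    finally:
        cluster.shutdown()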
Feb 1 04:47:31 localhost podman[296941]: 2026-02-01 09:47:31.275965675 +0000 UTC m=+0.096138130 container remove e5584900e40475bfb0e0992a38ca26dd007e21b74ac9ad70262abebad82b75d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, io.openshift.expose-services=, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, release=1764794109, com.redhat.component=rhceph-container) Feb 1 04:47:31 localhost openstack_network_exporter[239388]: ERROR 09:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:47:31 localhost openstack_network_exporter[239388]: Feb 1 04:47:31 localhost openstack_network_exporter[239388]: ERROR 09:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:47:31 localhost openstack_network_exporter[239388]: Feb 1 04:47:32 localhost systemd[1]: ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e@mon.np0005604215.service: Deactivated successfully. Feb 1 04:47:32 localhost systemd[1]: Stopped Ceph mon.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 04:47:32 localhost systemd[1]: ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e@mon.np0005604215.service: Consumed 14.373s CPU time. Feb 1 04:47:32 localhost systemd[1]: Reloading. Feb 1 04:47:32 localhost systemd-rc-local-generator[297124]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:47:32 localhost systemd-sysv-generator[297128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:47:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:47:33 localhost podman[297134]: 2026-02-01 09:47:33.867221797 +0000 UTC m=+0.079662184 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:47:33 localhost podman[297135]: 2026-02-01 09:47:33.917207822 +0000 UTC m=+0.127464951 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:47:33 localhost podman[297134]: 2026-02-01 09:47:33.929762206 +0000 UTC m=+0.142202603 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:47:33 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:47:33 localhost podman[297135]: 2026-02-01 09:47:33.949991069 +0000 UTC m=+0.160248218 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:47:33 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
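The node_exporter container above runs with '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', which limits the systemd collector to the units relevant on this node (the backslash is doubled only because it sits inside a quoted config string). The snippet below simply exercises that pattern against a few unit names, using Python's re.fullmatch for illustration; whether node_exporter anchors the pattern in exactly this way is not shown by this log, so the match behaviour here is an assumption.

    import re

    # Pattern copied from the node_exporter config_data in the records above.
    UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    units = [
        "openvswitch.service",
        "virtqemud.service",
        "rsyslog.service",
        "edpm_example.service",   # hypothetical name, used only to exercise the edpm_ branch
        "sshd.service",           # outside the include list, so it would be ignored
    ]
    for unit in units:
        kept = UNIT_INCLUDE.fullmatch(unit) is not None
        print(f"{unit:25s} -> {'collected' if kept else 'ignored'}")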
Feb 1 04:47:39 localhost podman[297180]: 2026-02-01 09:47:39.869838847 +0000 UTC m=+0.081867434 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, name=ubi9/ubi-minimal, vcs-type=git, version=9.7, config_id=openstack_network_exporter) Feb 1 04:47:39 localhost podman[297180]: 2026-02-01 09:47:39.885789256 +0000 UTC m=+0.097817823 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container) Feb 1 04:47:39 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:47:39 localhost systemd[1]: tmp-crun.zoXmtY.mount: Deactivated successfully. 
Feb 1 04:47:39 localhost podman[297181]: 2026-02-01 09:47:39.989695459 +0000 UTC m=+0.194219782 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Feb 1 04:47:40 localhost podman[297181]: 2026-02-01 09:47:40.023840928 +0000 UTC m=+0.228365231 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent) Feb 1 04:47:40 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:47:41 localhost podman[297269]: Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.717505511 +0000 UTC m=+0.078581211 container create a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, GIT_BRANCH=main, distribution-scope=public) Feb 1 04:47:41 localhost systemd[1]: Started libpod-conmon-a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5.scope. Feb 1 04:47:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:47:41.764 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:47:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:47:41.765 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:47:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:47:41.765 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:47:41 localhost systemd[1]: Started libcrun container. 
Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.684718754 +0000 UTC m=+0.045794484 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.790407753 +0000 UTC m=+0.151483443 container init a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, RELEASE=main, architecture=x86_64) Feb 1 04:47:41 localhost systemd[1]: tmp-crun.WBsMUL.mount: Deactivated successfully. Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.809708157 +0000 UTC m=+0.170783857 container start a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1764794109, maintainer=Guillaume Abrioux , version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=) Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.810683237 +0000 UTC m=+0.171758957 container attach a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, 
url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, release=1764794109, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:47:41 localhost gallant_swanson[297284]: 167 167 Feb 1 04:47:41 localhost systemd[1]: libpod-a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5.scope: Deactivated successfully. Feb 1 04:47:41 localhost podman[297269]: 2026-02-01 09:47:41.815275671 +0000 UTC m=+0.176351361 container died a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, ceph=True, version=7, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Feb 1 04:47:41 localhost podman[297291]: 2026-02-01 09:47:41.911897506 +0000 UTC m=+0.087779279 container remove a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_swanson, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1764794109, RELEASE=main, com.redhat.component=rhceph-container, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph) Feb 1 04:47:41 localhost systemd[1]: libpod-conmon-a090c2217890b10125338593243171cd4c2405f495df00f4fa9e1b57d4c9d9e5.scope: Deactivated successfully. Feb 1 04:47:42 localhost nova_compute[274317]: 2026-02-01 09:47:42.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:42 localhost nova_compute[274317]: 2026-02-01 09:47:42.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:47:42 localhost nova_compute[274317]: 2026-02-01 09:47:42.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:47:42 localhost nova_compute[274317]: 2026-02-01 09:47:42.118 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:47:42 localhost nova_compute[274317]: 2026-02-01 09:47:42.118 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:42 localhost podman[297361]: Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.641227198 +0000 UTC m=+0.065538492 container create 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, vcs-type=git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:47:42 localhost systemd[1]: Started 
libpod-conmon-627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074.scope. Feb 1 04:47:42 localhost systemd[1]: Started libcrun container. Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.610538657 +0000 UTC m=+0.034850051 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.714398318 +0000 UTC m=+0.138709612 container init 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, name=rhceph, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, architecture=x86_64, build-date=2025-12-08T17:28:53Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 1 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-9fdf3252c86cd77dd30ba16b0d5bab8522a6e21dec0e9b1ce9d9a425a94eb75a-merged.mount: Deactivated successfully. 
Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.730696189 +0000 UTC m=+0.155007483 container start 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git) Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.731078291 +0000 UTC m=+0.155389585 container attach 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, architecture=x86_64, io.openshift.expose-services=, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Feb 1 04:47:42 localhost brave_khorana[297376]: 167 167 Feb 1 04:47:42 localhost systemd[1]: libpod-627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074.scope: Deactivated successfully. 
Feb 1 04:47:42 localhost podman[297361]: 2026-02-01 09:47:42.734130886 +0000 UTC m=+0.158442230 container died 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-12-08T17:28:53Z) Feb 1 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-70cfb5f07481b3f3fad16c67cdb1c1c7fd1e6402160e72f58db10ad112738e8a-merged.mount: Deactivated successfully. Feb 1 04:47:42 localhost podman[297381]: 2026-02-01 09:47:42.836102159 +0000 UTC m=+0.088534483 container remove 627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_khorana, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Feb 1 04:47:42 localhost systemd[1]: libpod-conmon-627079e2047fcf3f15a8576168eb5db554a5e44dc47191e90a02f447ec2e1074.scope: Deactivated successfully. 
Feb 1 04:47:43 localhost nova_compute[274317]: 2026-02-01 09:47:43.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:43 localhost nova_compute[274317]: 2026-02-01 09:47:43.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:47:43 localhost podman[297458]: Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.645578881 +0000 UTC m=+0.076466424 container create 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, io.openshift.expose-services=, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:47:43 localhost systemd[1]: Started libpod-conmon-4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379.scope. Feb 1 04:47:43 localhost systemd[1]: Started libcrun container. 
Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.614084155 +0000 UTC m=+0.044971748 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.713860028 +0000 UTC m=+0.144747571 container init 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=) Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.725041198 +0000 UTC m=+0.155928741 container start 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1764794109) Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.725367598 +0000 UTC m=+0.156255181 container attach 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, vcs-type=git, 
com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True) Feb 1 04:47:43 localhost naughty_pare[297473]: 167 167 Feb 1 04:47:43 localhost systemd[1]: libpod-4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379.scope: Deactivated successfully. Feb 1 04:47:43 localhost podman[297458]: 2026-02-01 09:47:43.727559457 +0000 UTC m=+0.158447010 container died 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git) Feb 1 04:47:43 localhost systemd[1]: var-lib-containers-storage-overlay-d865387eec8712add5ecac852d57eea35d162b81658ca3d6623164bee555a68e-merged.mount: Deactivated successfully. 
Feb 1 04:47:43 localhost podman[297479]: 2026-02-01 09:47:43.827536857 +0000 UTC m=+0.083787874 container remove 4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_pare, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1764794109, architecture=x86_64) Feb 1 04:47:43 localhost systemd[1]: libpod-conmon-4b770e78160ff2dde941722b3949925eed50d2fdd604b07d076f765327072379.scope: Deactivated successfully. Feb 1 04:47:44 localhost nova_compute[274317]: 2026-02-01 09:47:44.097 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:44 localhost nova_compute[274317]: 2026-02-01 09:47:44.098 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:44 localhost nova_compute[274317]: 2026-02-01 09:47:44.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:44 localhost podman[297555]: Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.690118331 +0000 UTC m=+0.076248478 container create bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z) Feb 1 04:47:44 localhost systemd[1]: Started libpod-conmon-bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d.scope. Feb 1 04:47:44 localhost systemd[1]: Started libcrun container. Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.755316792 +0000 UTC m=+0.141446959 container init bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.660919797 +0000 UTC m=+0.047049994 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:44 localhost systemd[1]: tmp-crun.7d4eaN.mount: Deactivated successfully. Feb 1 04:47:44 localhost kind_kapitsa[297570]: 167 167 Feb 1 04:47:44 localhost systemd[1]: libpod-bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d.scope: Deactivated successfully. 
Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.773247283 +0000 UTC m=+0.159377440 container start bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, ceph=True, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, release=1764794109, version=7, vcs-type=git, distribution-scope=public, build-date=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.773582924 +0000 UTC m=+0.159713121 container attach bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-12-08T17:28:53Z, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, name=rhceph) Feb 1 04:47:44 localhost podman[297555]: 2026-02-01 09:47:44.777100314 +0000 UTC m=+0.163230501 container died bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7) Feb 1 04:47:44 localhost podman[297575]: 2026-02-01 09:47:44.872532902 +0000 UTC m=+0.084375402 container remove bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kapitsa, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 1 04:47:44 localhost systemd[1]: libpod-conmon-bc1a00cc4bfacedea660fa9cf40dd7d4c83c5e887dbdea713da30dcf655c9f6d.scope: Deactivated successfully. 
Feb 1 04:47:45 localhost podman[297644]: Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.5717244 +0000 UTC m=+0.080896732 container create 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=) Feb 1 04:47:45 localhost systemd[1]: Started libpod-conmon-5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f.scope. Feb 1 04:47:45 localhost systemd[1]: Started libcrun container. Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.632245055 +0000 UTC m=+0.141417397 container init 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux , release=1764794109, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-12-08T17:28:53Z, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.535805307 +0000 UTC m=+0.044977649 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.641386071 +0000 UTC m=+0.150558403 container start 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , 
distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1764794109, io.buildah.version=1.41.4) Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.641713531 +0000 UTC m=+0.150885903 container attach 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-12-08T17:28:53Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, release=1764794109, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, version=7, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:47:45 localhost hungry_wozniak[297659]: 167 167 Feb 1 04:47:45 localhost systemd[1]: libpod-5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f.scope: Deactivated successfully. 
Feb 1 04:47:45 localhost podman[297644]: 2026-02-01 09:47:45.644170668 +0000 UTC m=+0.153343000 container died 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-da9b2cbf6412129e240e02aa28ac27361f675fa9db3a0d86033788be561790c5-merged.mount: Deactivated successfully. Feb 1 04:47:45 localhost podman[297664]: 2026-02-01 09:47:45.740891507 +0000 UTC m=+0.082851565 container remove 5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_wozniak, com.redhat.component=rhceph-container, release=1764794109, build-date=2025-12-08T17:28:53Z, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Feb 1 04:47:45 localhost systemd[1]: libpod-conmon-5f2e5d2550a2cea1bbcc92e862efc358d725bf143043da6b08a687f7de89430f.scope: Deactivated successfully. 
Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.123 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.124 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.124 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.124 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.125 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.599 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.788 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.790 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12379MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.791 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.791 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:47:46 localhost systemd[1]: tmp-crun.lutCVS.mount: Deactivated successfully. 
Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.858 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.859 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:47:46 localhost podman[297807]: 2026-02-01 09:47:46.868561729 +0000 UTC m=+0.103867803 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, GIT_BRANCH=main, ceph=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, version=7, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Feb 1 04:47:46 localhost nova_compute[274317]: 2026-02-01 09:47:46.882 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:47:46 localhost podman[297807]: 2026-02-01 09:47:46.993737848 +0000 UTC m=+0.229043872 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1764794109, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, 
build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, ceph=True) Feb 1 04:47:47 localhost nova_compute[274317]: 2026-02-01 09:47:47.314 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:47:47 localhost nova_compute[274317]: 2026-02-01 09:47:47.322 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:47:47 localhost nova_compute[274317]: 2026-02-01 09:47:47.340 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:47:47 localhost nova_compute[274317]: 2026-02-01 09:47:47.342 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:47:47 localhost nova_compute[274317]: 2026-02-01 09:47:47.343 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:47:48 localhost podman[298099]: Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.066595765 +0000 UTC m=+0.081533153 container create a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main) Feb 1 04:47:48 localhost podman[298098]: 2026-02-01 09:47:48.069398603 +0000 UTC m=+0.091607559 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:47:48 localhost podman[298098]: 2026-02-01 09:47:48.083658149 +0000 UTC 
m=+0.105867105 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:47:48 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:47:48 localhost systemd[1]: Started libpod-conmon-a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b.scope. Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.032152547 +0000 UTC m=+0.047089955 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:48 localhost systemd[1]: Started libcrun container. 
Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.147315222 +0000 UTC m=+0.162252610 container init a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, release=1764794109, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z) Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.15841769 +0000 UTC m=+0.173355068 container start a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, CEPH_POINT_RELEASE=, release=1764794109, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git) Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.159865645 +0000 UTC m=+0.174803063 container attach a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:47:48 localhost xenodochial_faraday[298163]: 167 167 Feb 1 04:47:48 localhost systemd[1]: libpod-a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b.scope: Deactivated successfully. Feb 1 04:47:48 localhost podman[298099]: 2026-02-01 09:47:48.162960302 +0000 UTC m=+0.177897700 container died a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main) Feb 1 04:47:48 localhost podman[298183]: 2026-02-01 09:47:48.260933319 +0000 UTC m=+0.085327812 container remove a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_faraday, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1764794109, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:47:48 localhost systemd[1]: libpod-conmon-a7aae9f3543217d68a7cd7f3879eab8df30212d3933852bd5bc5e48c4dde950b.scope: Deactivated successfully. Feb 1 04:47:48 localhost podman[298219]: Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.374751122 +0000 UTC m=+0.076669741 container create c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, release=1764794109, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Feb 1 04:47:48 localhost systemd[1]: Started libpod-conmon-c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31.scope. Feb 1 04:47:48 localhost systemd[1]: Started libcrun container. 
Feb 1 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f95aabb4684321a51913ff627785e4f9ef9e2e575c9fcc7c90d932b9524c90/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f95aabb4684321a51913ff627785e4f9ef9e2e575c9fcc7c90d932b9524c90/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f95aabb4684321a51913ff627785e4f9ef9e2e575c9fcc7c90d932b9524c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f95aabb4684321a51913ff627785e4f9ef9e2e575c9fcc7c90d932b9524c90/merged/var/lib/ceph/mon/ceph-np0005604215 supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.34433958 +0000 UTC m=+0.046258199 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.443842196 +0000 UTC m=+0.145760825 container init c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., release=1764794109) Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.456597734 +0000 UTC m=+0.158516353 container start c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1764794109, io.openshift.expose-services=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.456868893 +0000 UTC m=+0.158787522 container attach c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, vcs-type=git, distribution-scope=public, release=1764794109, GIT_BRANCH=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:47:48 localhost systemd[1]: libpod-c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31.scope: Deactivated successfully. 
Feb 1 04:47:48 localhost podman[298219]: 2026-02-01 09:47:48.54972777 +0000 UTC m=+0.251646419 container died c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, vcs-type=git, ceph=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7) Feb 1 04:47:48 localhost podman[298315]: 2026-02-01 09:47:48.647900923 +0000 UTC m=+0.085800666 container remove c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_almeida, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Feb 1 04:47:48 localhost systemd[1]: libpod-conmon-c91d1b8551f793249a5fabc41d1193573791bf883dc0873fc31d3e8bf12b1a31.scope: Deactivated successfully. Feb 1 04:47:48 localhost systemd[1]: Reloading. Feb 1 04:47:48 localhost systemd-rc-local-generator[298390]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:47:48 localhost systemd-sysv-generator[298395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-6c185dde61299f3aeca0de7063e02a902e7d92abbfe51bc200c13c432d55b3e8-merged.mount: Deactivated successfully. Feb 1 04:47:49 localhost systemd[1]: Reloading. Feb 1 04:47:49 localhost systemd-rc-local-generator[298465]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 1 04:47:49 localhost systemd-sysv-generator[298468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 1 04:47:49 localhost nova_compute[274317]: 2026-02-01 09:47:49.340 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:49 localhost nova_compute[274317]: 2026-02-01 09:47:49.359 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:49 localhost nova_compute[274317]: 2026-02-01 09:47:49.359 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:47:49 localhost systemd[1]: Starting Ceph mon.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e... Feb 1 04:47:49 localhost podman[298568]: Feb 1 04:47:49 localhost podman[298568]: 2026-02-01 09:47:49.869743685 +0000 UTC m=+0.079870172 container create 06016fd8ed9ea17ea04edc1114fee86a5099952604beba493d4c6ae63ee431e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1764794109) Feb 1 04:47:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0296bb360dd52703e2f6e172cd39b61af6d54393dd30e9b797ae2b34c6c8938/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:49 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/f0296bb360dd52703e2f6e172cd39b61af6d54393dd30e9b797ae2b34c6c8938/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0296bb360dd52703e2f6e172cd39b61af6d54393dd30e9b797ae2b34c6c8938/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0296bb360dd52703e2f6e172cd39b61af6d54393dd30e9b797ae2b34c6c8938/merged/var/lib/ceph/mon/ceph-np0005604215 supports timestamps until 2038 (0x7fffffff) Feb 1 04:47:49 localhost podman[298568]: 2026-02-01 09:47:49.837237777 +0000 UTC m=+0.047364284 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:47:49 localhost podman[298568]: 2026-02-01 09:47:49.937009271 +0000 UTC m=+0.147135768 container init 06016fd8ed9ea17ea04edc1114fee86a5099952604beba493d4c6ae63ee431e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.expose-services=, release=1764794109, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main) Feb 1 04:47:49 localhost podman[298568]: 2026-02-01 09:47:49.945506447 +0000 UTC m=+0.155632944 container start 06016fd8ed9ea17ea04edc1114fee86a5099952604beba493d4c6ae63ee431e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mon-np0005604215, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, 
build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, GIT_BRANCH=main) Feb 1 04:47:49 localhost bash[298568]: 06016fd8ed9ea17ea04edc1114fee86a5099952604beba493d4c6ae63ee431e1 Feb 1 04:47:49 localhost systemd[1]: Started Ceph mon.np0005604215 for 33fac0b9-80c7-560f-918a-c92d3021ca1e. Feb 1 04:47:49 localhost ceph-mon[298604]: set uid:gid to 167:167 (ceph:ceph) Feb 1 04:47:49 localhost ceph-mon[298604]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Feb 1 04:47:49 localhost ceph-mon[298604]: pidfile_write: ignore empty --pid-file Feb 1 04:47:50 localhost ceph-mon[298604]: load: jerasure load: lrc Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: RocksDB version: 7.9.2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Git sha 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Compile date 2025-09-23 00:00:00 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: DB SUMMARY Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: DB Session ID: HRI08R8OB38WGRLS0V9F Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: CURRENT file: CURRENT Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: IDENTITY file: IDENTITY Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005604215/store.db dir, Total Num: 0, files: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005604215/store.db: 000004.log size: 636 ; Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.error_if_exists: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.create_if_missing: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.paranoid_checks: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.env: 0x562ae5f5a9e0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.fs: PosixFileSystem Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.info_log: 0x562ae8602d20 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_file_opening_threads: 16 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.statistics: (nil) Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.use_fsync: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_log_file_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.log_file_time_to_roll: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.keep_log_file_num: 1000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.recycle_log_file_num: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.allow_fallocate: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.allow_mmap_reads: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.allow_mmap_writes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.use_direct_reads: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: 
Options.create_missing_column_families: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.db_log_dir: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.wal_dir: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.table_cache_numshardbits: 6 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.advise_random_on_open: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.db_write_buffer_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.write_buffer_manager: 0x562ae8613540 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.use_adaptive_mutex: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.rate_limiter: (nil) Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.wal_recovery_mode: 2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enable_thread_tracking: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enable_pipelined_write: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.unordered_write: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.row_cache: None Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.wal_filter: None Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.allow_ingest_behind: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.two_write_queues: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.manual_wal_flush: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.wal_compression: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.atomic_flush: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.persist_stats_to_disk: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.log_readahead_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.best_efforts_recovery: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 1 04:47:50 localhost ceph-mon[298604]: 
rocksdb: Options.allow_data_in_errors: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.db_host_id: __hostname__ Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enforce_single_del_contracts: true Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_background_jobs: 2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_background_compactions: -1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_subcompactions: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.delayed_write_rate : 16777216 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_total_wal_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.stats_dump_period_sec: 600 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.stats_persist_period_sec: 600 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_open_files: -1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bytes_per_sync: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_readahead_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_background_flushes: -1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Compression algorithms supported: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kZSTD supported: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kXpressCompression supported: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kBZip2Compression supported: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kLZ4Compression supported: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kZlibCompression supported: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: #011kSnappyCompression supported: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: DMutex implementation: pthread_mutex_t Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005604215/store.db/MANIFEST-000005 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.merge_operator: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_filter: None Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_filter_factory: None Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.sst_partitioner_factory: None Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.memtable_factory: SkipListFactory Feb 1 04:47:50 localhost 
ceph-mon[298604]: rocksdb: Options.table_factory: BlockBasedTable Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562ae8602980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562ae85ff1f0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.write_buffer_size: 33554432 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_write_buffer_number: 2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression: NoCompression Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression: Disabled Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.prefix_extractor: nullptr Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.num_levels: 7 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.window_bits: -14 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.level: 32767 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.strategy: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.enabled: false Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.target_file_size_base: 67108864 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.target_file_size_multiplier: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_base: 268435456 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.arena_block_size: 1048576 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.disable_auto_compactions: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.table_properties_collectors: Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.inplace_update_support: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.memtable_huge_page_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.bloom_locality: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.max_successive_merges: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.paranoid_file_checks: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.force_consistency_checks: 1 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.report_bg_io_stats: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.ttl: 2592000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enable_blob_files: false Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.min_blob_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_file_size: 268435456 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_compression_type: NoCompression Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.enable_blob_garbage_collection: false Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.blob_file_starting_level: 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005604215/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c098c70d-588d-409e-9f3c-16c3b4da1135 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939270005681, "job": 1, "event": "recovery_started", "wal_files": [4]} Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Feb 1 04:47:50 localhost ceph-mon[298604]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1769939270008160, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939270008382, "job": 1, "event": "recovery_finished"} Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562ae8626e00 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: DB pointer 0x562ae871c000 Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:47:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.72 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.72 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 
0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ae85ff1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215 does not exist in monmap, will attempt to join an existing cluster Feb 1 04:47:50 localhost ceph-mon[298604]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] Feb 1 04:47:50 localhost ceph-mon[298604]: starting mon.np0005604215 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005604215 fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(???) 
e0 preinit fsid 33fac0b9-80c7-560f-918a-c92d3021ca1e Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing) e14 sync_obtain_latest_monmap Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing) e14 sync_obtain_latest_monmap obtained monmap e14 Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).mds e16 new map Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T07:59:04.480309+0000#012modified#0112026-02-01T09:39:55.510678+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26329}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26329 members: 26329#012[mds.mds.np0005604212.tkdkxt{0:26329} state up:active seq 12 addr [v2:172.18.0.106:6808/1133321306,v1:172.18.0.106:6809/1133321306] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005604215.rwvxvg{-1:16872} state up:standby seq 1 addr [v2:172.18.0.108:6808/2262553558,v1:172.18.0.108:6809/2262553558] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005604213.jdbvyh{-1:16878} state up:standby seq 1 addr [v2:172.18.0.107:6808/3323601884,v1:172.18.0.107:6809/3323601884] compat {c=[1],r=[1],i=[17ff]}] Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).osd e89 crush map has features 3314933000852226048, adjusting msgr requires Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mon.np0005604215 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604212 calling monitor election Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604211.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:47:50 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:47:50 localhost ceph-mon[298604]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005604215,np0005604213 Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604212 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215 calling monitor election Feb 1 04:47:50 
localhost ceph-mon[298604]: mon.np0005604215 is new leader, mons np0005604215,np0005604212,np0005604213 in quorum (ranks 0,1,2) Feb 1 04:47:50 localhost ceph-mon[298604]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005604215,np0005604213) Feb 1 04:47:50 localhost ceph-mon[298604]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:47:50 localhost ceph-mon[298604]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Removed label mon from host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604211.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604211.cuflqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Removed label mgr from host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604211.cuflqz (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604211.cuflqz on np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604211.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604211 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604211 on np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: Removed label _admin from host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.4 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604213 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mon.np0005604213 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Added label _no_schedule to host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.5 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain"}]': finished Feb 1 04:47:50 localhost ceph-mon[298604]: Removed host np0005604211.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 172.18.0.105:0/4119751104' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.26720 ' entity='mgr.np0005604211.cuflqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Activating manager daemon np0005604209.isqrps Feb 1 04:47:50 localhost ceph-mon[298604]: Manager daemon np0005604211.cuflqz is unresponsive, replacing it with standby daemon np0005604209.isqrps Feb 1 04:47:50 localhost ceph-mon[298604]: Manager daemon np0005604209.isqrps is now available Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"} : dispatch Feb 1 04:47:50 
localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005604211.localdomain.devices.0"}]': finished Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/mirror_snapshot_schedule"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604209.isqrps/trash_purge_schedule"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: removing stray HostCache host record np0005604211.localdomain.devices.0 Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: Saving service mon spec with placement label:mon Feb 1 04:47:50 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:47:50 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:47:50 localhost ceph-mon[298604]: Cluster is now healthy Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: [01/Feb/2026:09:47:27] ENGINE Bus STARTING Feb 1 04:47:50 localhost ceph-mon[298604]: [01/Feb/2026:09:47:27] ENGINE Serving on http://172.18.0.200:8765 Feb 1 04:47:50 localhost ceph-mon[298604]: [01/Feb/2026:09:47:27] ENGINE Serving on https://172.18.0.200:7150 Feb 1 04:47:50 localhost ceph-mon[298604]: [01/Feb/2026:09:47:27] ENGINE Bus STARTED Feb 1 04:47:50 localhost ceph-mon[298604]: [01/Feb/2026:09:47:27] ENGINE Client ('172.18.0.200', 32790) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:47:50 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604215.localdomain to 
877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:47:50 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:47:50 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:47:50 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:47:50 localhost ceph-mon[298604]: Updating 
np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:47:50 localhost ceph-mon[298604]: Remove daemons mon.np0005604215 Feb 1 04:47:50 localhost ceph-mon[298604]: Safe to remove mon.np0005604215: new quorum should be ['np0005604212', 'np0005604213'] (from ['np0005604212', 'np0005604213']) Feb 1 04:47:50 localhost ceph-mon[298604]: Removing monitor np0005604215 from monmap... Feb 1 04:47:50 localhost ceph-mon[298604]: Removing daemon mon.np0005604215 from np0005604215.localdomain -- ports [] Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604212 calling monitor election Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604213 calling monitor election Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604212 is new leader, mons np0005604212,np0005604213 in quorum (ranks 0,1) Feb 1 04:47:50 localhost ceph-mon[298604]: overall HEALTH_OK Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604212.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604212 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604212 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:47:50 localhost ceph-mon[298604]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.4 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604213 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.3 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.2 (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring osd.5 (monmap changed)... 
Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... Feb 1 04:47:50 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Deploying daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 
172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:50 localhost ceph-mon[298604]: mon.np0005604215@-1(synchronizing).paxosservice(auth 1..38) refresh upgraded, format 0 -> 3 Feb 1 04:47:50 localhost ceph-mgr[278126]: ms_deliver_dispatch: unhandled message 0x55d17e09e000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0 Feb 1 04:47:51 localhost podman[298750]: 2026-02-01 09:47:51.034313602 +0000 UTC m=+0.094261041 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_BRANCH=main, release=1764794109, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, ceph=True, vcs-type=git) Feb 1 04:47:51 localhost podman[298750]: 2026-02-01 09:47:51.134781078 +0000 UTC m=+0.194728517 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, architecture=x86_64, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, 
vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, release=1764794109, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 1 04:47:52 localhost ceph-mon[298604]: mon.np0005604215@-1(probing) e15 my rank is now 2 (was -1) Feb 1 04:47:52 localhost ceph-mon[298604]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:47:52 localhost ceph-mon[298604]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Feb 1 04:47:52 localhost ceph-mon[298604]: mon.np0005604215@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:47:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:47:53 localhost podman[298873]: 2026-02-01 09:47:53.879483484 +0000 UTC m=+0.088412009 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:47:53 localhost podman[298873]: 2026-02-01 09:47:53.916121181 +0000 UTC m=+0.125049716 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:47:53 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:47:54 localhost ceph-mds[276952]: mds.beacon.mds.np0005604215.rwvxvg missed beacon ack from the monitors Feb 1 04:47:56 localhost systemd[1]: session-71.scope: Deactivated successfully. Feb 1 04:47:56 localhost systemd[1]: session-71.scope: Consumed 1.694s CPU time. Feb 1 04:47:56 localhost systemd-logind[761]: Session 71 logged out. 
Waiting for processes to exit. Feb 1 04:47:56 localhost systemd-logind[761]: Removed session 71. Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604212 calling monitor election Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604213 calling monitor election Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604212 is new leader, mons np0005604212,np0005604213 in quorum (ranks 0,1) Feb 1 04:47:57 localhost ceph-mon[298604]: Health check failed: 1/3 mons down, quorum np0005604212,np0005604213 (MON_DOWN) Feb 1 04:47:57 localhost ceph-mon[298604]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005604212,np0005604213 Feb 1 04:47:57 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:47:57 localhost ceph-mon[298604]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:47:57 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:47:57 localhost ceph-mon[298604]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:47:57 localhost ceph-mon[298604]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005604212,np0005604213 Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' 
entity='mgr.np0005604209.isqrps' Feb 1 04:47:57 localhost ceph-mon[298604]: log_channel(cluster) log [INF] : mon.np0005604215 calling monitor election Feb 1 04:47:57 localhost ceph-mon[298604]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Feb 1 04:47:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 1 04:47:57 localhost ceph-mon[298604]: mgrc update_daemon_metadata mon.np0005604215 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005604215.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005604215.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116604,os=Linux} Feb 1 04:47:58 localhost ceph-mon[298604]: mon.np0005604215 calling monitor election Feb 1 04:47:58 localhost ceph-mon[298604]: mon.np0005604212 calling monitor election Feb 1 04:47:58 localhost ceph-mon[298604]: mon.np0005604215 calling monitor election Feb 1 04:47:58 localhost ceph-mon[298604]: mon.np0005604213 calling monitor election Feb 1 04:47:58 localhost ceph-mon[298604]: mon.np0005604212 is new leader, mons np0005604212,np0005604213,np0005604215 in quorum (ranks 0,1,2) Feb 1 04:47:58 localhost ceph-mon[298604]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005604212,np0005604213) Feb 1 04:47:58 localhost ceph-mon[298604]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:47:58 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Feb 1 04:47:58 localhost ceph-mon[298604]: stray daemon mgr.np0005604209.isqrps on host np0005604209.localdomain not managed by cephadm Feb 1 04:47:58 localhost ceph-mon[298604]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 
daemon(s) not managed by cephadm Feb 1 04:47:58 localhost ceph-mon[298604]: stray host np0005604209.localdomain has 1 stray daemons: ['mgr.np0005604209.isqrps'] Feb 1 04:47:59 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:59 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:47:59 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:47:59 localhost ceph-mon[298604]: Reconfiguring osd.1 (monmap changed)... Feb 1 04:47:59 localhost ceph-mon[298604]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:48:00 localhost podman[236852]: time="2026-02-01T09:48:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:48:00 localhost podman[236852]: @ - - [01/Feb/2026:09:48:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:48:00 localhost podman[236852]: @ - - [01/Feb/2026:09:48:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1" Feb 1 04:48:00 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:00 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:00 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:48:00 localhost ceph-mon[298604]: Reconfiguring osd.4 (monmap changed)... Feb 1 04:48:00 localhost ceph-mon[298604]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604212.tkdkxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:48:01 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604212.tkdkxt (monmap changed)... 
Feb 1 04:48:01 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604212.tkdkxt on np0005604212.localdomain Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:01 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604212.oynhpm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:48:01 localhost openstack_network_exporter[239388]: ERROR 09:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:48:01 localhost openstack_network_exporter[239388]: Feb 1 04:48:01 localhost openstack_network_exporter[239388]: ERROR 09:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:48:01 localhost openstack_network_exporter[239388]: Feb 1 04:48:02 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604212.oynhpm (monmap changed)... Feb 1 04:48:02 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604212.oynhpm on np0005604212.localdomain Feb 1 04:48:02 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:02 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:02 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604213.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:48:03 localhost ceph-mon[298604]: Reconfiguring crash.np0005604213 (monmap changed)... 
Feb 1 04:48:03 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604213 on np0005604213.localdomain Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:03 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 1 04:48:04 localhost ceph-mon[298604]: Reconfig service osd.default_drive_group Feb 1 04:48:04 localhost ceph-mon[298604]: Reconfiguring osd.0 (monmap changed)... Feb 1 04:48:04 localhost ceph-mon[298604]: Reconfiguring daemon osd.0 on np0005604213.localdomain Feb 1 04:48:04 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:04 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:04 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:04 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' Feb 1 04:48:04 localhost ceph-mon[298604]: from='mgr.27041 172.18.0.200:0/1452582069' entity='mgr.np0005604209.isqrps' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:48:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr fail"} v 0) Feb 1 04:48:04 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='client.? 
172.18.0.200:0/1659607' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:48:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e89 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 1 04:48:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e89 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 1 04:48:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 e90: 6 total, 6 up, 6 in Feb 1 04:48:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:48:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:48:04 localhost systemd[1]: session-73.scope: Deactivated successfully. Feb 1 04:48:04 localhost systemd[1]: session-73.scope: Consumed 18.359s CPU time. Feb 1 04:48:04 localhost systemd-logind[761]: Session 73 logged out. Waiting for processes to exit. Feb 1 04:48:04 localhost systemd-logind[761]: Removed session 73. Feb 1 04:48:04 localhost podman[299234]: 2026-02-01 09:48:04.742181143 +0000 UTC m=+0.086723276 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS) Feb 1 04:48:04 localhost podman[299234]: 2026-02-01 09:48:04.783649901 +0000 UTC m=+0.128192044 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:48:04 localhost systemd[1]: tmp-crun.qlTGh4.mount: Deactivated successfully. Feb 1 04:48:04 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:48:04 localhost podman[299235]: 2026-02-01 09:48:04.806640951 +0000 UTC m=+0.146884930 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:48:04 localhost podman[299235]: 2026-02-01 09:48:04.84302458 +0000 UTC m=+0.183268559 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:48:04 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:48:04 localhost sshd[299283]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:48:05 localhost systemd-logind[761]: New session 74 of user ceph-admin. Feb 1 04:48:05 localhost systemd[1]: Started Session 74 of User ceph-admin. Feb 1 04:48:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1019510607 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:05 localhost ceph-mon[298604]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:48:05 localhost ceph-mon[298604]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:48:05 localhost ceph-mon[298604]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:48:05 localhost ceph-mon[298604]: Activating manager daemon np0005604213.caiaeh Feb 1 04:48:05 localhost ceph-mon[298604]: from='client.? 172.18.0.200:0/1659607' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:48:05 localhost ceph-mon[298604]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:48:05 localhost ceph-mon[298604]: Manager daemon np0005604213.caiaeh is now available Feb 1 04:48:05 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/mirror_snapshot_schedule"} : dispatch Feb 1 04:48:05 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604213.caiaeh/trash_purge_schedule"} : dispatch Feb 1 04:48:06 localhost podman[299392]: 2026-02-01 09:48:06.062618991 +0000 UTC m=+0.102611324 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, distribution-scope=public, name=rhceph, 
vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, vendor=Red Hat, Inc., release=1764794109, GIT_BRANCH=main) Feb 1 04:48:06 localhost podman[299392]: 2026-02-01 09:48:06.258858224 +0000 UTC m=+0.298850567 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, release=1764794109, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:48:06 localhost ceph-mon[298604]: [01/Feb/2026:09:48:05] ENGINE Bus STARTING Feb 1 04:48:06 localhost ceph-mon[298604]: [01/Feb/2026:09:48:06] ENGINE Serving on https://172.18.0.107:7150 Feb 1 04:48:06 localhost ceph-mon[298604]: [01/Feb/2026:09:48:06] ENGINE Client ('172.18.0.107', 42754) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:48:06 localhost ceph-mon[298604]: [01/Feb/2026:09:48:06] ENGINE Serving on http://172.18.0.107:8765 Feb 1 04:48:06 localhost ceph-mon[298604]: [01/Feb/2026:09:48:06] ENGINE Bus STARTED Feb 1 04:48:06 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 1 04:48:06 localhost systemd[295356]: Activating special unit Exit the Session... Feb 1 04:48:06 localhost systemd[295356]: Stopped target Main User Target. Feb 1 04:48:06 localhost systemd[295356]: Stopped target Basic System. Feb 1 04:48:06 localhost systemd[295356]: Stopped target Paths. Feb 1 04:48:06 localhost systemd[295356]: Stopped target Sockets. Feb 1 04:48:06 localhost systemd[295356]: Stopped target Timers. Feb 1 04:48:06 localhost systemd[295356]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 1 04:48:06 localhost systemd[295356]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 04:48:06 localhost systemd[295356]: Closed D-Bus User Message Bus Socket. Feb 1 04:48:06 localhost systemd[295356]: Stopped Create User's Volatile Files and Directories. Feb 1 04:48:06 localhost systemd[295356]: Removed slice User Application Slice. Feb 1 04:48:06 localhost systemd[295356]: Reached target Shutdown. Feb 1 04:48:06 localhost systemd[295356]: Finished Exit the Session. Feb 1 04:48:06 localhost systemd[295356]: Reached target Exit the Session. Feb 1 04:48:06 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 1 04:48:06 localhost systemd[1]: Stopped User Manager for UID 1003. 
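[editor's note] The `mgr fail` dispatched at 04:48:04-04:48:05 above fails over the active manager: np0005604213.caiaeh activates, its web endpoints come up (the ENGINE lines on 172.18.0.107), and later audit entries switch from mgr.27041 (np0005604209.isqrps) to mgr.34373 (np0005604213.caiaeh). A quick check of which manager is currently active, assuming the `ceph` CLI is usable here:

# Sketch: after the "mgr fail" above, verify which manager took over.
# `ceph mgr stat` reports the active mgr name, which should now be
# np0005604213.caiaeh rather than np0005604209.isqrps.
import json, subprocess

def active_mgr() -> str:
    out = subprocess.run(["ceph", "mgr", "stat", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)["active_name"]

print("active mgr:", active_mgr())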
Feb 1 04:48:06 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 1 04:48:06 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 1 04:48:06 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 1 04:48:06 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 1 04:48:06 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 1 04:48:06 localhost systemd[1]: user-1003.slice: Consumed 2.151s CPU time. Feb 1 04:48:07 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:48:07 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:48:07 localhost ceph-mon[298604]: Cluster is now healthy Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:07 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:48:09 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.5", "name": 
"osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:48:09 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:48:09 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:48:09 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:48:09 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:48:09 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:48:09 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:48:09 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:48:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020040841 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:48:10 localhost podman[299990]: 2026-02-01 09:48:10.062428249 +0000 UTC m=+0.088621346 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.7, architecture=x86_64, build-date=2026-01-22T05:09:47Z, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9) Feb 1 04:48:10 localhost podman[299990]: 2026-02-01 09:48:10.107779909 +0000 UTC m=+0.133973046 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 1 04:48:10 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
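[editor's note] The osd_memory_target messages at 04:48:09 above are cephadm's memory autotuner at work: it proposes roughly 836.6 MiB per OSD on these hosts, but `osd_memory_target` enforces a hard minimum, so each attempted `config set` is rejected with "below minimum 939524096". The unit conversion behind those numbers is shown below; the autotune formula itself is not reproduced here.

# Worked numbers for the "Adjusting/Unable to set osd_memory_target" entries above.
proposed = 877_246_668          # bytes, from "Unable to set osd_memory_target ..."
minimum  = 939_524_096          # bytes, from "is below minimum 939524096"

MiB = 1024 * 1024
print(f"proposed: {proposed / MiB:.1f} MiB")   # ~836.6 MiB, matching "836.6M"
print(f"minimum : {minimum / MiB:.1f} MiB")    # 896.0 MiB
print("accepted" if proposed >= minimum else "rejected: below minimum")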
Feb 1 04:48:10 localhost podman[300031]: 2026-02-01 09:48:10.21197109 +0000 UTC m=+0.139732085 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:48:10 localhost podman[300031]: 2026-02-01 09:48:10.216773411 +0000 UTC m=+0.144534376 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS) Feb 1 04:48:10 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:48:10 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:48:10 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:48:10 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:48:11 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:48:11 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:48:11 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:48:11 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:48:11 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:11 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:12 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:48:12 localhost ceph-mon[298604]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:48:12 localhost ceph-mon[298604]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:48:12 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 1 04:48:13 localhost ceph-mon[298604]: Reconfiguring daemon osd.1 on np0005604212.localdomain Feb 1 04:48:13 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:13 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:13 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:13 localhost ceph-mon[298604]: from='mgr.34373 
172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:13 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 1 04:48:14 localhost ceph-mon[298604]: Reconfiguring daemon osd.4 on np0005604212.localdomain Feb 1 04:48:14 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:14 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:14 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:14 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:14 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 1 04:48:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054374 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:15 localhost ceph-mon[298604]: Reconfiguring osd.3 (monmap changed)... Feb 1 04:48:15 localhost ceph-mon[298604]: Reconfiguring daemon osd.3 on np0005604213.localdomain Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:15 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604213.jdbvyh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:48:16 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604213.jdbvyh (monmap changed)... 
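[editor's note] Every "Reconfiguring ... (monmap changed)" pair above follows the same pattern under the new active manager: fetch the daemon's key (`auth get` / `auth get-or-create`), then push a refreshed configuration to its host. The sketch below lists the daemons such a sweep would touch, assuming the cephadm orchestrator module is enabled so `ceph orch ps` works.

# Sketch: list the cephadm-managed daemons that a "(monmap changed)" sweep
# like the one above would reconfigure, grouped by host.
import json, subprocess

out = subprocess.run(["ceph", "orch", "ps", "--format", "json"],
                     check=True, capture_output=True, text=True).stdout
for d in json.loads(out):
    print(f"{d['hostname']:30} {d['daemon_type']:8} {d['daemon_id']}")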
Feb 1 04:48:16 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604213.jdbvyh on np0005604213.localdomain Feb 1 04:48:16 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:16 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:16 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604213.caiaeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:48:17 localhost podman[300384]: Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.203683264 +0000 UTC m=+0.074620377 container create 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git) Feb 1 04:48:17 localhost systemd[1]: Started libpod-conmon-306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8.scope. Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.171745495 +0000 UTC m=+0.042682628 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:17 localhost systemd[1]: Started libcrun container. 
Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.296801369 +0000 UTC m=+0.167738482 container init 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, architecture=x86_64, distribution-scope=public, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, build-date=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True) Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.306771722 +0000 UTC m=+0.177708855 container start 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.307099412 +0000 UTC m=+0.178036565 container attach 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, 
io.buildah.version=1.41.4, name=rhceph, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, RELEASE=main, release=1764794109, CEPH_POINT_RELEASE=) Feb 1 04:48:17 localhost naughty_lamarr[300399]: 167 167 Feb 1 04:48:17 localhost systemd[1]: libpod-306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8.scope: Deactivated successfully. Feb 1 04:48:17 localhost podman[300384]: 2026-02-01 09:48:17.310363014 +0000 UTC m=+0.181300137 container died 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1764794109, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7) Feb 1 04:48:17 localhost podman[300404]: 2026-02-01 09:48:17.404704978 +0000 UTC m=+0.081438311 container remove 306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_lamarr, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2025-12-08T17:28:53Z, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64) Feb 1 04:48:17 localhost systemd[1]: libpod-conmon-306a98355de5cfa494f60fdb9710420ff4bd256b373a9614be6ccaf418c295f8.scope: Deactivated successfully. Feb 1 04:48:17 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604213.caiaeh (monmap changed)... Feb 1 04:48:17 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604213.caiaeh on np0005604213.localdomain Feb 1 04:48:17 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:17 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:17 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005604215.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 1 04:48:18 localhost podman[300474]: Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.110813303 +0000 UTC m=+0.075536636 container create 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-12-08T17:28:53Z, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:48:18 localhost systemd[1]: Started libpod-conmon-9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022.scope. Feb 1 04:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:48:18 localhost systemd[1]: Started libcrun container. 
Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.174825417 +0000 UTC m=+0.139548760 container init 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, version=7, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, release=1764794109, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux ) Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.08103324 +0000 UTC m=+0.045756613 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.184961614 +0000 UTC m=+0.149684947 container start 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, release=1764794109, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Feb 1 04:48:18 localhost beautiful_bose[300490]: 167 167 Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.185603714 +0000 UTC m=+0.150327087 container attach 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1764794109, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:48:18 localhost systemd[1]: libpod-9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022.scope: Deactivated successfully. Feb 1 04:48:18 localhost podman[300474]: 2026-02-01 09:48:18.19345358 +0000 UTC m=+0.158176933 container died 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, version=7) Feb 1 04:48:18 localhost systemd[1]: var-lib-containers-storage-overlay-a606d324a3b8ca9b6901dafddc6753064925ed9a055ed0d5964d48ea316e641b-merged.mount: Deactivated successfully. Feb 1 04:48:18 localhost systemd[1]: tmp-crun.7KKSOe.mount: Deactivated successfully. 
Feb 1 04:48:18 localhost podman[300491]: 2026-02-01 09:48:18.274490347 +0000 UTC m=+0.109614743 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:48:18 localhost podman[300503]: 2026-02-01 09:48:18.305672013 +0000 UTC m=+0.099909588 container remove 9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_bose, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1764794109, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-type=git) Feb 1 04:48:18 localhost systemd[1]: libpod-conmon-9792d942921930fcd1de432405f1a21fa2b2a02635540e3ad19746a59e94a022.scope: Deactivated 
successfully. Feb 1 04:48:18 localhost podman[300491]: 2026-02-01 09:48:18.338766339 +0000 UTC m=+0.173890745 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:48:18 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:48:18 localhost ceph-mon[298604]: Reconfiguring crash.np0005604215 (monmap changed)... Feb 1 04:48:18 localhost ceph-mon[298604]: Reconfiguring daemon crash.np0005604215 on np0005604215.localdomain Feb 1 04:48:18 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:18 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:18 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 1 04:48:18 localhost ceph-mon[298604]: Reconfiguring osd.2 (monmap changed)... 
Feb 1 04:48:18 localhost ceph-mon[298604]: Reconfiguring daemon osd.2 on np0005604215.localdomain Feb 1 04:48:18 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:19 localhost podman[300587]: Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.196612315 +0000 UTC m=+0.078560471 container create 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, name=rhceph, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Feb 1 04:48:19 localhost systemd[1]: tmp-crun.7MnEao.mount: Deactivated successfully. Feb 1 04:48:19 localhost systemd[1]: var-lib-containers-storage-overlay-0be70610547c819dc755855de0deb5cf79d8e6abe7149521be6fb591b2e205a3-merged.mount: Deactivated successfully. Feb 1 04:48:19 localhost systemd[1]: Started libpod-conmon-0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7.scope. Feb 1 04:48:19 localhost systemd[1]: Started libcrun container. 
Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.262410415 +0000 UTC m=+0.144358581 container init 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., version=7, name=rhceph, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, build-date=2025-12-08T17:28:53Z) Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.167155593 +0000 UTC m=+0.049103769 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.274552155 +0000 UTC m=+0.156500311 container start 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.) 
Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.275013599 +0000 UTC m=+0.156961795 container attach 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-12-08T17:28:53Z, release=1764794109, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Feb 1 04:48:19 localhost adoring_gagarin[300602]: 167 167 Feb 1 04:48:19 localhost systemd[1]: libpod-0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7.scope: Deactivated successfully. Feb 1 04:48:19 localhost podman[300587]: 2026-02-01 09:48:19.277782276 +0000 UTC m=+0.159730432 container died 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:48:19 localhost podman[300607]: 2026-02-01 09:48:19.375206776 +0000 UTC m=+0.087081497 container remove 0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_gagarin, name=rhceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph 
Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Feb 1 04:48:19 localhost systemd[1]: libpod-conmon-0cd53eb72945342f0bfe76643c49e5bdd455d8abff6f681e5a76c8cd171aaab7.scope: Deactivated successfully. Feb 1 04:48:19 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:19 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:19 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:19 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 1 04:48:19 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054722 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:20 localhost systemd[1]: var-lib-containers-storage-overlay-e6d37cf46cf6759a7f6915b3c7b149c563d58c54b621d325e6edad63496636b9-merged.mount: Deactivated successfully. Feb 1 04:48:20 localhost podman[300683]: Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.236122717 +0000 UTC m=+0.067660609 container create cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, release=1764794109, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, build-date=2025-12-08T17:28:53Z) Feb 1 04:48:20 localhost systemd[1]: Started libpod-conmon-cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e.scope. 
Feb 1 04:48:20 localhost systemd[1]: Started libcrun container. Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.298327006 +0000 UTC m=+0.129864918 container init cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-12-08T17:28:53Z, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1764794109, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.202325809 +0000 UTC m=+0.033863771 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.314281494 +0000 UTC m=+0.145819416 container start cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.41.4, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, build-date=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, ceph=True) Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.314566333 +0000 UTC m=+0.146104255 container attach cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, GIT_BRANCH=main, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Feb 1 04:48:20 localhost festive_dubinsky[300698]: 167 167 Feb 1 04:48:20 localhost systemd[1]: libpod-cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e.scope: Deactivated successfully. Feb 1 04:48:20 localhost podman[300683]: 2026-02-01 09:48:20.318530358 +0000 UTC m=+0.150068300 container died cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=1764794109, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True) Feb 1 04:48:20 localhost podman[300703]: 2026-02-01 09:48:20.419609382 +0000 UTC m=+0.085705324 container remove cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_dubinsky, build-date=2025-12-08T17:28:53Z, org.opencontainers.image.created=2025-12-08T17:28:53Z, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=) Feb 1 04:48:20 localhost systemd[1]: libpod-conmon-cec86a538359b94622bf82feb9cec310c51d13a1624cbd9cb049bce09a092f0e.scope: Deactivated successfully. Feb 1 04:48:20 localhost ceph-mon[298604]: Reconfiguring osd.5 (monmap changed)... Feb 1 04:48:20 localhost ceph-mon[298604]: Reconfiguring daemon osd.5 on np0005604215.localdomain Feb 1 04:48:20 localhost ceph-mon[298604]: Saving service mon spec with placement label:mon Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005604215.rwvxvg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:20 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005604215.uhhqtv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 1 04:48:21 localhost podman[300775]: Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.115874909 +0000 UTC m=+0.074317747 container create a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_CLEAN=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1764794109, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z) Feb 1 04:48:21 localhost systemd[1]: Started 
libpod-conmon-a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3.scope. Feb 1 04:48:21 localhost systemd[1]: Started libcrun container. Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.176671623 +0000 UTC m=+0.135114471 container init a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, release=1764794109, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main) Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.08585833 +0000 UTC m=+0.044301218 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.187039367 +0000 UTC m=+0.145482195 container start a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-12-08T17:28:53Z, GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.187276775 +0000 UTC m=+0.145719603 container attach a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, 
vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-12-08T17:28:53Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1764794109, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main) Feb 1 04:48:21 localhost flamboyant_torvalds[300790]: 167 167 Feb 1 04:48:21 localhost systemd[1]: libpod-a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3.scope: Deactivated successfully. Feb 1 04:48:21 localhost podman[300775]: 2026-02-01 09:48:21.190847106 +0000 UTC m=+0.149289964 container died a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.created=2025-12-08T17:28:53Z, build-date=2025-12-08T17:28:53Z, version=7, vcs-type=git, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.207741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301207858, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 13135, "num_deletes": 262, "total_data_size": 27193912, "memory_usage": 28596848, "flush_reason": "Manual Compaction"} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Feb 1 04:48:21 localhost systemd[1]: tmp-crun.QxktcG.mount: Deactivated successfully. Feb 1 04:48:21 localhost systemd[1]: var-lib-containers-storage-overlay-e455adc3f6d13a9d9194ee1092d8de849e55318b22d3465b1f2b600ad4c0b1f8-merged.mount: Deactivated successfully. Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301290864, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 23674382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 13140, "table_properties": {"data_size": 23605400, "index_size": 37708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 322843, "raw_average_key_size": 26, "raw_value_size": 23399734, "raw_average_value_size": 1938, "num_data_blocks": 1428, "num_entries": 12072, "num_filter_entries": 12072, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 1769939270, "file_creation_time": 1769939301, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 83168 microseconds, and 39631 cpu microseconds. 
Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.290923) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 23674382 bytes OK Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.290947) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.293228) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.293252) EVENT_LOG_v1 {"time_micros": 1769939301293241, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.293271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 27105185, prev total WAL file size 27105185, number of live WAL files 2. Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.296419) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353131' seq:72057594037927935, type:22 .. '6B760031373637' seq:0, type:0; will stop at (end) Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(22MB) 8(1762B)] Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301296517, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 23676144, "oldest_snapshot_seqno": -1} Feb 1 04:48:21 localhost systemd[1]: var-lib-containers-storage-overlay-59874579fceaaf9680656a20fe5dd803ff018810c1c1a3e307e224656e8951e7-merged.mount: Deactivated successfully. 
Feb 1 04:48:21 localhost podman[300795]: 2026-02-01 09:48:21.312198956 +0000 UTC m=+0.108054654 container remove a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_torvalds, release=1764794109, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, io.buildah.version=1.41.4, vcs-type=git) Feb 1 04:48:21 localhost systemd[1]: libpod-conmon-a73e8520acd295b7a509ea7c68475ec50ebf1429835ef79eb2b1c607dcb108c3.scope: Deactivated successfully. Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11818 keys, 23670868 bytes, temperature: kUnknown Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301417926, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 23670868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 23602560, "index_size": 37679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29573, "raw_key_size": 319018, "raw_average_key_size": 26, "raw_value_size": 23400139, "raw_average_value_size": 1980, "num_data_blocks": 1427, "num_entries": 11818, "num_filter_entries": 11818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939301, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.420619) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 23670868 bytes Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.422205) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.0 rd, 192.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(22.6, 0.0 +0.0 blob) out(22.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 12077, records dropped: 259 output_compression: NoCompression Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.422262) EVENT_LOG_v1 {"time_micros": 1769939301422247, "job": 4, "event": "compaction_finished", "compaction_time_micros": 122703, "compaction_time_cpu_micros": 42227, "output_level": 6, "num_output_files": 1, "total_output_size": 23670868, "num_input_records": 12077, "num_output_records": 11818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301425721, "job": 4, "event": "table_file_deletion", "file_number": 14} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939301425780, "job": 4, "event": "table_file_deletion", "file_number": 8} Feb 1 04:48:21 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:48:21.296340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:48:21 localhost ceph-mon[298604]: Reconfiguring mds.mds.np0005604215.rwvxvg (monmap changed)... Feb 1 04:48:21 localhost ceph-mon[298604]: Reconfiguring daemon mds.mds.np0005604215.rwvxvg on np0005604215.localdomain Feb 1 04:48:21 localhost ceph-mon[298604]: Reconfiguring mgr.np0005604215.uhhqtv (monmap changed)... 
Feb 1 04:48:21 localhost ceph-mon[298604]: Reconfiguring daemon mgr.np0005604215.uhhqtv on np0005604215.localdomain Feb 1 04:48:21 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:21 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:21 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:48:22 localhost podman[300864]: Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.083007217 +0000 UTC m=+0.080220183 container create 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, version=7, RELEASE=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, build-date=2025-12-08T17:28:53Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:48:22 localhost systemd[1]: Started libpod-conmon-3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27.scope. Feb 1 04:48:22 localhost systemd[1]: Started libcrun container. 
Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.143590753 +0000 UTC m=+0.140803709 container init 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.created=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., release=1764794109, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, version=7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.052507382 +0000 UTC m=+0.049720378 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.152473981 +0000 UTC m=+0.149686937 container start 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-12-08T17:28:53Z, version=7, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, io.buildah.version=1.41.4, RELEASE=main, GIT_CLEAN=True, release=1764794109, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, vcs-type=git, architecture=x86_64, io.openshift.expose-services=) Feb 1 04:48:22 localhost sleepy_vaughan[300879]: 167 167 Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.152673748 +0000 UTC m=+0.149886704 container attach 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, version=7, GIT_CLEAN=True, vcs-type=git, 
org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public) Feb 1 04:48:22 localhost systemd[1]: libpod-3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27.scope: Deactivated successfully. Feb 1 04:48:22 localhost podman[300864]: 2026-02-01 09:48:22.155181085 +0000 UTC m=+0.152394051 container died 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-12-08T17:28:53Z, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.created=2025-12-08T17:28:53Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 1 04:48:22 localhost systemd[1]: tmp-crun.yTJzvC.mount: Deactivated successfully. Feb 1 04:48:22 localhost systemd[1]: var-lib-containers-storage-overlay-b37d8b5f095d75d8634c221dfda891575ed2b7e00de3f80dadf8a6761730af67-merged.mount: Deactivated successfully. 
Feb 1 04:48:22 localhost podman[300885]: 2026-02-01 09:48:22.251510522 +0000 UTC m=+0.086044255 container remove 3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2025-12-08T17:28:53Z, ceph=True, com.redhat.component=rhceph-container, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1764794109, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph) Feb 1 04:48:22 localhost systemd[1]: libpod-conmon-3e4c3f6b5aefe3660de5630a94b9c24ecdc867dfed1e86dfe3c6f5a68a884d27.scope: Deactivated successfully. Feb 1 04:48:22 localhost ceph-mon[298604]: Reconfiguring mon.np0005604215 (monmap changed)... Feb 1 04:48:22 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604215 on np0005604215.localdomain Feb 1 04:48:22 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:22 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:22 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:48:22 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:22 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:23 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:48:23 localhost ceph-mon[298604]: Reconfiguring mon.np0005604212 (monmap changed)... Feb 1 04:48:23 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604212 on np0005604212.localdomain Feb 1 04:48:24 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:24 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:24 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 1 04:48:24 localhost ceph-mon[298604]: Reconfiguring mon.np0005604213 (monmap changed)... 
Feb 1 04:48:24 localhost ceph-mon[298604]: Reconfiguring daemon mon.np0005604213 on np0005604213.localdomain Feb 1 04:48:24 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:24 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:48:24 localhost podman[300920]: 2026-02-01 09:48:24.90915203 +0000 UTC m=+0.125170879 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:48:24 localhost podman[300920]: 2026-02-01 09:48:24.94812201 +0000 UTC m=+0.164140869 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:48:24 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:48:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:25 localhost ceph-mon[298604]: from='mgr.34373 172.18.0.107:0/1411989763' entity='mgr.np0005604213.caiaeh' Feb 1 04:48:30 localhost podman[236852]: time="2026-02-01T09:48:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:48:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:30 localhost podman[236852]: @ - - [01/Feb/2026:09:48:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:48:30 localhost podman[236852]: @ - - [01/Feb/2026:09:48:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17788 "" "Go-http-client/1.1" Feb 1 04:48:31 localhost openstack_network_exporter[239388]: ERROR 09:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:48:31 localhost openstack_network_exporter[239388]: Feb 1 04:48:31 localhost openstack_network_exporter[239388]: ERROR 09:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:48:31 localhost openstack_network_exporter[239388]: Feb 1 04:48:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:48:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/940957761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:48:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:48:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/940957761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:48:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:48:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:48:35 localhost systemd[1]: tmp-crun.3XTgs7.mount: Deactivated successfully. 
Feb 1 04:48:35 localhost podman[300945]: 2026-02-01 09:48:35.8773622 +0000 UTC m=+0.092483176 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller) Feb 1 04:48:35 localhost podman[300946]: 2026-02-01 09:48:35.949501759 +0000 UTC m=+0.161768926 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:48:35 localhost podman[300946]: 2026-02-01 09:48:35.956983622 +0000 UTC m=+0.169250789 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:48:35 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:48:35 localhost podman[300945]: 2026-02-01 09:48:35.978911049 +0000 UTC m=+0.194032055 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:48:35 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:48:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:48:40 localhost podman[300992]: 2026-02-01 09:48:40.863558439 +0000 UTC m=+0.077409474 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1769056855, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 04:48:40 localhost podman[300992]: 2026-02-01 09:48:40.875350148 +0000 UTC m=+0.089201183 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, distribution-scope=public, release=1769056855, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z) Feb 1 04:48:40 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:48:40 localhost systemd[1]: tmp-crun.3KUxVz.mount: Deactivated successfully. Feb 1 04:48:40 localhost podman[300993]: 2026-02-01 09:48:40.975146822 +0000 UTC m=+0.185031293 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:48:41 localhost podman[300993]: 2026-02-01 09:48:41.008890169 +0000 UTC m=+0.218774650 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Feb 1 04:48:41 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:48:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:48:41.765 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:48:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:48:41.765 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:48:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:48:41.766 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:48:42 localhost nova_compute[274317]: 2026-02-01 09:48:42.104 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:42 localhost nova_compute[274317]: 2026-02-01 09:48:42.105 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:48:42 localhost nova_compute[274317]: 2026-02-01 09:48:42.105 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:48:42 localhost nova_compute[274317]: 2026-02-01 09:48:42.189 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network 
info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:48:44 localhost nova_compute[274317]: 2026-02-01 09:48:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:44 localhost nova_compute[274317]: 2026-02-01 09:48:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:44 localhost nova_compute[274317]: 2026-02-01 09:48:44.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:44 localhost nova_compute[274317]: 2026-02-01 09:48:44.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:48:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:45 localhost nova_compute[274317]: 2026-02-01 09:48:45.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:46 localhost nova_compute[274317]: 2026-02-01 09:48:46.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.123 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.123 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:48:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:48:47 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1928219287' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.564 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.800 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.802 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12422MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.802 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.803 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.877 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.877 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:48:47 localhost nova_compute[274317]: 2026-02-01 09:48:47.897 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:48:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:48:48 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4292011110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:48:48 localhost nova_compute[274317]: 2026-02-01 09:48:48.345 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:48:48 localhost nova_compute[274317]: 2026-02-01 09:48:48.351 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:48:48 localhost nova_compute[274317]: 2026-02-01 09:48:48.378 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:48:48 localhost nova_compute[274317]: 2026-02-01 09:48:48.381 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:48:48 localhost nova_compute[274317]: 2026-02-01 09:48:48.381 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:48:48 localhost podman[301074]: 2026-02-01 09:48:48.868397278 +0000 UTC m=+0.083208236 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:48:48 localhost podman[301074]: 2026-02-01 09:48:48.879598289 +0000 UTC m=+0.094409237 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Feb 1 04:48:48 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:48:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:51 localhost nova_compute[274317]: 2026-02-01 09:48:51.384 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:51 localhost nova_compute[274317]: 2026-02-01 09:48:51.385 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:48:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:48:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:48:55 localhost podman[301094]: 2026-02-01 09:48:55.859325977 +0000 UTC m=+0.075165194 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:48:55 localhost podman[301094]: 2026-02-01 09:48:55.867472062 +0000 UTC m=+0.083311309 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:48:55 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:49:00 localhost podman[236852]: time="2026-02-01T09:49:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:49:00 localhost podman[236852]: @ - - [01/Feb/2026:09:49:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:49:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:00 localhost podman[236852]: @ - - [01/Feb/2026:09:49:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17791 "" "Go-http-client/1.1" Feb 1 04:49:01 localhost openstack_network_exporter[239388]: ERROR 09:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:49:01 localhost openstack_network_exporter[239388]: Feb 1 04:49:01 localhost openstack_network_exporter[239388]: ERROR 09:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:49:01 localhost openstack_network_exporter[239388]: Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, 
no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:49:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:49:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:49:06 localhost podman[301117]: 2026-02-01 09:49:06.819039212 +0000 UTC m=+0.081463311 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:49:06 localhost podman[301117]: 2026-02-01 09:49:06.858705184 +0000 UTC m=+0.121129283 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 04:49:06 localhost systemd[1]: tmp-crun.5OIIA6.mount: Deactivated successfully. Feb 1 04:49:06 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:49:06 localhost podman[301118]: 2026-02-01 09:49:06.880859307 +0000 UTC m=+0.141860352 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:49:06 localhost podman[301118]: 2026-02-01 09:49:06.890634743 +0000 UTC m=+0.151635758 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:49:06 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:49:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:49:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:49:11 localhost systemd[1]: tmp-crun.PX1KOe.mount: Deactivated successfully. Feb 1 04:49:11 localhost podman[301167]: 2026-02-01 09:49:11.863872057 +0000 UTC m=+0.076979012 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:49:11 localhost podman[301167]: 2026-02-01 09:49:11.896793707 +0000 UTC m=+0.109900672 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:49:11 localhost systemd[1]: tmp-crun.4pqjI0.mount: Deactivated successfully. Feb 1 04:49:11 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:49:11 localhost podman[301166]: 2026-02-01 09:49:11.919462687 +0000 UTC m=+0.132921453 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-01-22T05:09:47Z, version=9.7, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 1 04:49:11 localhost podman[301166]: 2026-02-01 09:49:11.930706289 +0000 UTC m=+0.144165045 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, config_id=openstack_network_exporter, architecture=x86_64, vcs-type=git) Feb 1 04:49:11 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:49:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.507255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359507409, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1027, "num_deletes": 251, "total_data_size": 1358557, "memory_usage": 1388096, "flush_reason": "Manual Compaction"} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359518960, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 870764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13146, "largest_seqno": 14167, "table_properties": {"data_size": 866270, "index_size": 2093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10920, "raw_average_key_size": 21, "raw_value_size": 856975, "raw_average_value_size": 1657, "num_data_blocks": 88, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939301, "oldest_key_time": 1769939301, "file_creation_time": 1769939359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11743 microseconds, and 3788 cpu microseconds. Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
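The rocksdb EVENT_LOG_v1 entries above (flush_started, table_file_creation, flush_finished) carry a single-line JSON payload after the "EVENT_LOG_v1 " marker. A minimal sketch that pulls the payload out of journal lines and decodes it (field names as they appear in the log; only the standard library is assumed):

#!/usr/bin/env python3
import json
import re
import sys

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

for line in sys.stdin:
    m = EVENT_RE.search(line)
    if not m:
        continue
    event = json.loads(m.group(1))
    # e.g. "flush_started 5 1358557" or "table_file_creation 5 870764"
    print(event.get("event"), event.get("job"),
          event.get("file_size") or event.get("total_data_size"))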
Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.519009) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 870764 bytes OK Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.519030) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.521062) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.521088) EVENT_LOG_v1 {"time_micros": 1769939359521081, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.521114) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1353349, prev total WAL file size 1353349, number of live WAL files 2. Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.521966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(850KB)], [15(22MB)] Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359522024, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 24541632, "oldest_snapshot_seqno": -1} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11809 keys, 20524115 bytes, temperature: kUnknown Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359649138, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 20524115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20457444, "index_size": 36042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29573, "raw_key_size": 319468, "raw_average_key_size": 27, "raw_value_size": 20256736, "raw_average_value_size": 1715, "num_data_blocks": 1356, "num_entries": 11809, "num_filter_entries": 11809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.649485) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 20524115 bytes Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.651317) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.0 rd, 161.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 22.6 +0.0 blob) out(19.6 +0.0 blob), read-write-amplify(51.8) write-amplify(23.6) OK, records in: 12335, records dropped: 526 output_compression: NoCompression Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.651347) EVENT_LOG_v1 {"time_micros": 1769939359651334, "job": 6, "event": "compaction_finished", "compaction_time_micros": 127184, "compaction_time_cpu_micros": 52026, "output_level": 6, "num_output_files": 1, "total_output_size": 20524115, "num_input_records": 12335, "num_output_records": 11809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359651590, "job": 6, "event": "table_file_deletion", "file_number": 17} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939359655577, "job": 6, "event": "table_file_deletion", "file_number": 15} Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.521852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.655608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.655613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.655615) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.655618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:49:19 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:49:19.655621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:49:19 localhost podman[301204]: 2026-02-01 09:49:19.861246534 +0000 UTC m=+0.078069654 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Feb 1 04:49:19 localhost podman[301204]: 2026-02-01 09:49:19.895885269 +0000 UTC m=+0.112708349 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:49:19 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:49:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr fail"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/2103452742' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 e91: 6 total, 6 up, 6 in Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr handle_mgr_map Activating! 
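The compaction summary in the 09:49:19 entries above ("MB/sec: 193.0 rd, 161.4 wr", "read-write-amplify(51.8) write-amplify(23.6)") is consistent with the raw figures logged alongside it. A short sketch of the arithmetic, with the amplification definitions inferred from the fact that they reproduce the logged values:

#!/usr/bin/env python3
in_l0_bytes = 870_764                      # flushed L0 table #17 (file_size)
in_l6_bytes = 24_541_632 - in_l0_bytes     # input_data_size minus the L0 file (old L6 table #15)
out_bytes = 20_524_115                     # new L6 table #18 (file_size)
compaction_micros = 127_184                # compaction_time_micros

rd_rate = 24_541_632 / compaction_micros   # bytes per microsecond == MB/s -> 193.0
wr_rate = out_bytes / compaction_micros    # -> 161.4
write_amp = out_bytes / in_l0_bytes                             # -> 23.6
rw_amp = (in_l0_bytes + in_l6_bytes + out_bytes) / in_l0_bytes  # -> 51.8

print(f"rd {rd_rate:.1f} MB/s  wr {wr_rate:.1f} MB/s  "
      f"write-amplify {write_amp:.1f}  read-write-amplify {rw_amp:.1f}")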
Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr handle_mgr_map I am now activating Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604212"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604212"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604213"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604213"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005604215"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata", "id": "np0005604215"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604215.rwvxvg"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604213.jdbvyh"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata", "who": "mds.np0005604212.tkdkxt"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).mds e16 all = 0 Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604215.uhhqtv", "id": "np0005604215.uhhqtv"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604212.oynhpm", "id": "np0005604212.oynhpm"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604209.isqrps", "id": "np0005604209.isqrps"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mds metadata"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).mds e16 all = 1 Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd metadata"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mon metadata"} : dispatch Feb 1 04:49:21 localhost ceph-mgr[278126]: [balancer 
DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: balancer Feb 1 04:49:21 localhost ceph-mgr[278126]: [balancer INFO root] Starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:49:21 Feb 1 04:49:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:49:21 localhost ceph-mgr[278126]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Feb 1 04:49:21 localhost systemd[1]: session-74.scope: Deactivated successfully. Feb 1 04:49:21 localhost systemd[1]: session-74.scope: Consumed 10.568s CPU time. Feb 1 04:49:21 localhost systemd-logind[761]: Session 74 logged out. Waiting for processes to exit. Feb 1 04:49:21 localhost systemd-logind[761]: Removed session 74. Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: cephadm Feb 1 04:49:21 localhost ceph-mgr[278126]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: crash Feb 1 04:49:21 localhost ceph-mgr[278126]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: devicehealth Feb 1 04:49:21 localhost ceph-mgr[278126]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: iostat Feb 1 04:49:21 localhost ceph-mgr[278126]: [devicehealth INFO root] Starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: nfs Feb 1 04:49:21 localhost ceph-mgr[278126]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: orchestrator Feb 1 04:49:21 localhost ceph-mgr[278126]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: pg_autoscaler Feb 1 04:49:21 localhost ceph-mgr[278126]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: progress Feb 1 04:49:21 localhost ceph-mgr[278126]: [progress INFO root] Loading... Feb 1 04:49:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:49:21 localhost ceph-mgr[278126]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 1 04:49:21 localhost ceph-mgr[278126]: [progress INFO root] Loaded OSDMap, ready. 
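The mon_command dispatches above (mon metadata, mds metadata, mgr metadata, osd metadata) are the inventory queries the newly activated mgr issues. The same data can be pulled by hand with the ceph CLI; a minimal sketch, assuming an admin keyring is available to the "ceph" command on this host:

#!/usr/bin/env python3
import json
import subprocess

def ceph(*args):
    # Run a ceph CLI subcommand and decode its JSON output.
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# Same queries the mgr dispatched through the mon, one daemon class at a time.
for target in ("mon", "osd", "mds", "mgr"):
    meta = ceph(target, "metadata")
    print(target, len(meta), "daemons")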
Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] recovery thread starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] starting setup Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: rbd_support Feb 1 04:49:21 localhost ceph-mgr[278126]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: restful Feb 1 04:49:21 localhost ceph-mgr[278126]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: status Feb 1 04:49:21 localhost ceph-mgr[278126]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: [restful INFO root] server_addr: :: server_port: 8003 Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: telemetry Feb 1 04:49:21 localhost ceph-mgr[278126]: [restful WARNING root] server not running: no certificate configured Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:49:21 localhost ceph-mgr[278126]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:49:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:49:21 localhost ceph-mgr[278126]: mgr load Constructed class from module: volumes Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] PerfHandler: starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost 
ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.434+0000 7f93f224a640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.435+0000 7f93f224a640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.435+0000 7f93f224a640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.435+0000 7f93f224a640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.435+0000 7f93f224a640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.439+0000 7f93ef244640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.439+0000 7f93ef244640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.439+0000 7f93ef244640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.439+0000 7f93ef244640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:49:21.439+0000 7f93ef244640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: images, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TaskHandler: starting Feb 1 04:49:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} v 0) Feb 1 04:49:21 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:49:21 localhost ceph-mgr[278126]: 
[rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 1 04:49:21 localhost ceph-mgr[278126]: [rbd_support INFO root] setup complete Feb 1 04:49:21 localhost ceph-mon[298604]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: Activating manager daemon np0005604215.uhhqtv Feb 1 04:49:21 localhost ceph-mon[298604]: from='client.? 172.18.0.200:0/2103452742' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 1 04:49:21 localhost ceph-mon[298604]: Manager daemon np0005604215.uhhqtv is now available Feb 1 04:49:21 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:49:21 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005604215.uhhqtv/trash_purge_schedule"} : dispatch Feb 1 04:49:21 localhost sshd[301362]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:49:21 localhost systemd-logind[761]: New session 75 of user ceph-admin. Feb 1 04:49:21 localhost systemd[1]: Started Session 75 of User ceph-admin. Feb 1 04:49:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:22 localhost systemd[1]: tmp-crun.BBFa6K.mount: Deactivated successfully. 
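The mon entries above show the raw mon_command payloads the freshly activated mgr dispatches while rbd_support re-registers its schedules: each is a small JSON object with "prefix", "who", and "name" fields (here "config rm" against the module's per-mgr schedule keys). Below is a minimal, illustrative sketch of issuing the same payload through the python-rados bindings; the conffile path and availability of admin credentials are assumptions, and the key name is copied verbatim from the audit lines above.

    # Illustrative sketch only (not how ceph-mgr itself dispatches commands):
    # send the same "config rm" mon command seen in the audit log through the
    # python-rados bindings. Assumes /etc/ceph/ceph.conf and a usable keyring.
    import json
    import rados

    cmd = {
        "prefix": "config rm",
        "who": "mgr",
        "name": "mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule",
    }

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print("return code:", ret, "status:", outs)

The same payload corresponds to the CLI form "ceph config rm mgr mgr/rbd_support/np0005604215.uhhqtv/mirror_snapshot_schedule".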
Feb 1 04:49:22 localhost podman[301474]: 2026-02-01 09:49:22.678740978 +0000 UTC m=+0.102799329 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, release=1764794109, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., org.opencontainers.image.created=2025-12-08T17:28:53Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-12-08T17:28:53Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph) Feb 1 04:49:22 localhost podman[301474]: 2026-02-01 09:49:22.783647773 +0000 UTC m=+0.207706104 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.created=2025-12-08T17:28:53Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, build-date=2025-12-08T17:28:53Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1764794109, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5) Feb 1 04:49:22 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:49:22] ENGINE Bus STARTING Feb 1 04:49:22 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:49:22] ENGINE Bus STARTING Feb 1 04:49:22 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:49:22] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:49:22 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:49:22] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:49:23 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:49:23] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:49:23 
localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:49:23] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:49:23 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:49:23] ENGINE Bus STARTED Feb 1 04:49:23 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:49:23] ENGINE Bus STARTED Feb 1 04:49:23 localhost ceph-mgr[278126]: [cephadm INFO cherrypy.error] [01/Feb/2026:09:49:23] ENGINE Client ('172.18.0.108', 45716) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:49:23 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : [01/Feb/2026:09:49:23] ENGINE Client ('172.18.0.108', 45716) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:49:23 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:23 localhost ceph-mon[298604]: [01/Feb/2026:09:49:22] ENGINE Bus STARTING Feb 1 04:49:23 localhost ceph-mon[298604]: [01/Feb/2026:09:49:22] ENGINE Serving on http://172.18.0.108:8765 Feb 1 04:49:23 localhost ceph-mon[298604]: [01/Feb/2026:09:49:23] ENGINE Serving on https://172.18.0.108:7150 Feb 1 04:49:23 localhost ceph-mon[298604]: [01/Feb/2026:09:49:23] ENGINE Bus STARTED Feb 1 04:49:23 localhost ceph-mon[298604]: [01/Feb/2026:09:49:23] ENGINE Client ('172.18.0.108', 45716) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:49:23 localhost ceph-mgr[278126]: [devicehealth INFO root] Check health Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:49:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 1 04:49:24 localhost ceph-mon[298604]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 1 04:49:24 localhost ceph-mon[298604]: Cluster is now healthy Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' 
entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:49:24 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:49:24 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:49:24 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:49:24 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 1 04:49:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:49:24 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604213.localdomain to 
836.6M Feb 1 04:49:24 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:49:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:49:24 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:24 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:49:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 
172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:49:25 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 
04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:49:25 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:49:25 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:49:25 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:49:25 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.conf Feb 1 04:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
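The repeated cephadm warnings above record the osd_memory_target autotuner proposing a per-host value (877243801 or 877246668 bytes, reported as 836.6M) that the mon rejects because it falls below the option's minimum of 939524096 bytes (896 MiB). The arithmetic can be checked directly; the sketch below is illustrative only and is not cephadm's code, with the minimum taken from the "error parsing value" text above.

    # Minimal sketch: reproduce the clamp that rejects the proposed
    # osd_memory_target values from the log. 939524096 bytes = 896 MiB is the
    # minimum quoted in the warnings above.
    OSD_MEMORY_TARGET_MIN = 939_524_096

    def check_target(proposed_bytes: int) -> str:
        mib = proposed_bytes / 2**20
        if proposed_bytes < OSD_MEMORY_TARGET_MIN:
            return (f"reject {proposed_bytes} (~{mib:.1f}M): "
                    f"below minimum {OSD_MEMORY_TARGET_MIN}")
        return f"accept {proposed_bytes} (~{mib:.1f}M)"

    # Values cephadm tried for the three hosts, per the log entries above.
    for proposed in (877_243_801, 877_246_668):
        print(check_target(proposed))

Because the computed value stays below the minimum, the subsequent "config set" fails, so the per-OSD overrides removed just before it are not replaced.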
Feb 1 04:49:26 localhost podman[301949]: 2026-02-01 09:49:26.00146639 +0000 UTC m=+0.085074824 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:49:26 localhost podman[301949]: 2026-02-01 09:49:26.038639544 +0000 UTC m=+0.122248028 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:49:26 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
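The health_status/exec_died pair above is the normal lifecycle of a podman healthcheck on this node: systemd starts a transient unit that runs "/usr/bin/podman healthcheck run <container-id>", the test from the container's healthcheck stanza (here "/openstack/healthcheck podman_exporter") executes, and the unit deactivates once the exec exits. A small sketch of driving the same check by hand follows; the container name is taken from the log, and the zero-exit-means-healthy interpretation reflects podman's documented behaviour rather than anything stated in the log itself.

    # Illustrative sketch: run the same healthcheck command systemd launches
    # for the transient unit and interpret its exit status.
    import subprocess

    def container_is_healthy(container: str) -> bool:
        # "podman healthcheck run" exits 0 when the container's configured
        # healthcheck test reports healthy, non-zero otherwise.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print("podman_exporter healthy:", container_is_healthy("podman_exporter"))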
Feb 1 04:49:26 localhost ceph-mgr[278126]: mgr.server handle_open ignoring open from mgr.np0005604213.caiaeh 172.18.0.107:0/309736900; not ready for session (expect reconnect) Feb 1 04:49:26 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:26 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:26 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:26 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:26 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:26 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "mgr metadata", "who": "np0005604213.caiaeh", "id": "np0005604213.caiaeh"} : dispatch Feb 1 04:49:27 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:27 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:27 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.conf Feb 1 04:49:27 localhost ceph-mgr[278126]: [cephadm INFO cephadm.serve] Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:27 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:49:27 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 0 B/s wr, 20 op/s Feb 1 04:49:27 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev b203ac22-5257-4a69-8274-fe65fc85b21f (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:49:27 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev b203ac22-5257-4a69-8274-fe65fc85b21f (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:49:27 localhost ceph-mgr[278126]: [progress INFO root] Completed event b203ac22-5257-4a69-8274-fe65fc85b21f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:49:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:49:27 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604215.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604212.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: Updating np0005604213.localdomain:/var/lib/ceph/33fac0b9-80c7-560f-918a-c92d3021ca1e/config/ceph.client.admin.keyring Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:28 localhost 
ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:49:28 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:49:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:49:28 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:49:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:49:28 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 25c0d117-e808-4c72-b186-7a21eb50bff1 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:49:28 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 25c0d117-e808-4c72-b186-7a21eb50bff1 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:49:28 localhost ceph-mgr[278126]: [progress INFO root] Completed event 25c0d117-e808-4c72-b186-7a21eb50bff1 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:49:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:49:28 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:49:29 localhost ceph-mon[298604]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 1 04:49:29 localhost ceph-mon[298604]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 1 04:49:29 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:49:29 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:29 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 0 B/s wr, 15 op/s Feb 1 04:49:30 localhost podman[236852]: time="2026-02-01T09:49:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:49:30 localhost podman[236852]: @ - - [01/Feb/2026:09:49:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:49:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:30 localhost podman[236852]: @ - - [01/Feb/2026:09:49:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17798 "" "Go-http-client/1.1" Feb 1 04:49:31 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:49:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:49:31 localhost 
openstack_network_exporter[239388]: ERROR 09:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:49:31 localhost openstack_network_exporter[239388]: Feb 1 04:49:31 localhost openstack_network_exporter[239388]: ERROR 09:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:49:31 localhost openstack_network_exporter[239388]: Feb 1 04:49:31 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 0 B/s wr, 12 op/s Feb 1 04:49:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:49:33 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 1 04:49:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:35 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 1 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:49:37 localhost systemd[1]: tmp-crun.waYBBU.mount: Deactivated successfully. Feb 1 04:49:37 localhost podman[302452]: 2026-02-01 09:49:37.864665239 +0000 UTC m=+0.077094544 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:49:37 localhost podman[302453]: 2026-02-01 09:49:37.884722286 +0000 UTC m=+0.089705229 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:49:37 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 1 04:49:37 localhost podman[302453]: 2026-02-01 09:49:37.957865586 +0000 UTC m=+0.162848529 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:49:37 localhost podman[302452]: 2026-02-01 09:49:37.974703103 +0000 UTC m=+0.187132368 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:49:37 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:49:38 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:49:38 localhost systemd[1]: tmp-crun.KvoChO.mount: Deactivated successfully. Feb 1 04:49:39 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:49:41.766 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:49:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:49:41.766 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:49:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:49:41.766 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:49:41 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:49:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
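The three "_check_child_processes" DEBUG lines from ovn_metadata_agent above (acquiring, acquired after waiting 0.001s, released after holding 0.000s) are the standard trace emitted by oslo.concurrency's lockutils wrapper around a synchronized method; the lockutils.py line references in the messages point at that wrapper. A minimal sketch of the calling pattern is below; the class is illustrative only, while the lock name mirrors the log.

    # Minimal sketch (not neutron's actual ProcessMonitor): the synchronized
    # decorator from oslo.concurrency wraps the call and, at DEBUG level,
    # logs the "Acquiring lock ... / acquired ... waited / released ... held"
    # lines seen above.
    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Body elided; the decorator alone is what produces the lock
            # acquire/hold timing messages in the agent log.
            pass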
Feb 1 04:49:42 localhost podman[302500]: 2026-02-01 09:49:42.879130272 +0000 UTC m=+0.090283707 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, vcs-type=git, architecture=x86_64, release=1769056855, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:49:42 localhost podman[302500]: 2026-02-01 09:49:42.919819797 +0000 UTC m=+0.130973282 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, release=1769056855, container_name=openstack_network_exporter) Feb 1 04:49:42 localhost systemd[1]: tmp-crun.oEIPpL.mount: Deactivated successfully. Feb 1 04:49:42 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:49:42 localhost podman[302501]: 2026-02-01 09:49:42.937326464 +0000 UTC m=+0.144962049 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:49:42 localhost podman[302501]: 2026-02-01 09:49:42.943827568 +0000 UTC m=+0.151463193 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 04:49:42 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:49:43 localhost nova_compute[274317]: 2026-02-01 09:49:43.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:43 localhost nova_compute[274317]: 2026-02-01 09:49:43.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:49:43 localhost nova_compute[274317]: 2026-02-01 09:49:43.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:49:43 localhost nova_compute[274317]: 2026-02-01 09:49:43.115 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:49:43 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:44 localhost nova_compute[274317]: 2026-02-01 09:49:44.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:45 localhost nova_compute[274317]: 2026-02-01 09:49:45.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:45 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:46 localhost nova_compute[274317]: 2026-02-01 09:49:46.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:46 localhost nova_compute[274317]: 2026-02-01 09:49:46.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:46 localhost nova_compute[274317]: 2026-02-01 09:49:46.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:46 localhost nova_compute[274317]: 2026-02-01 09:49:46.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:49:47 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.131 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.132 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.132 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.133 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.133 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:49:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:49:48 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3814061159' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.580 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.788 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.790 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12364MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.790 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.791 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.933 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.934 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:49:48 localhost nova_compute[274317]: 2026-02-01 09:49:48.950 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:49:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:49:49 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/865720152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:49:49 localhost nova_compute[274317]: 2026-02-01 09:49:49.424 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:49:49 localhost nova_compute[274317]: 2026-02-01 09:49:49.431 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:49:49 localhost nova_compute[274317]: 2026-02-01 09:49:49.468 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:49:49 localhost nova_compute[274317]: 2026-02-01 09:49:49.470 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:49:49 localhost nova_compute[274317]: 
2026-02-01 09:49:49.471 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:49:49 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:50 localhost nova_compute[274317]: 2026-02-01 09:49:50.467 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:50 localhost nova_compute[274317]: 2026-02-01 09:49:50.530 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:50 localhost nova_compute[274317]: 2026-02-01 09:49:50.531 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:49:50 localhost podman[302584]: 2026-02-01 09:49:50.865863706 +0000 UTC m=+0.081427850 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:49:50 localhost podman[302584]: 2026-02-01 09:49:50.904984931 +0000 UTC m=+0.120549065 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 1 04:49:50 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:49:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:49:51 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:53 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:49:55 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:49:56 localhost podman[302603]: 2026-02-01 09:49:56.863613501 +0000 UTC m=+0.079523270 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:49:56 localhost podman[302603]: 2026-02-01 09:49:56.872441368 +0000 UTC m=+0.088351127 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:49:56 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:49:57 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:49:59 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:00 localhost podman[236852]: time="2026-02-01T09:50:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:50:00 localhost podman[236852]: @ - - [01/Feb/2026:09:50:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:50:00 localhost ceph-mon[298604]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 04:50:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:00 localhost podman[236852]: @ - - [01/Feb/2026:09:50:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17789 "" "Go-http-client/1.1" Feb 1 04:50:01 localhost openstack_network_exporter[239388]: ERROR 09:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:50:01 localhost openstack_network_exporter[239388]: Feb 1 04:50:01 localhost openstack_network_exporter[239388]: ERROR 09:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:50:01 localhost openstack_network_exporter[239388]: Feb 1 04:50:01 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:03 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:05 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:07 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:50:08 localhost podman[302624]: 2026-02-01 09:50:08.871954962 +0000 UTC m=+0.084484486 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:50:08 localhost podman[302625]: 2026-02-01 09:50:08.940325972 +0000 UTC m=+0.150313236 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:50:08 localhost podman[302625]: 2026-02-01 09:50:08.952698898 +0000 UTC m=+0.162686162 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:50:08 localhost podman[302624]: 2026-02-01 09:50:08.964671622 +0000 UTC m=+0.177201176 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller) Feb 1 04:50:08 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:50:08 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:50:09 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:11 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:50:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:50:13 localhost podman[302671]: 2026-02-01 09:50:13.865477105 +0000 UTC m=+0.080808880 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9/ubi-minimal, architecture=x86_64, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vcs-type=git, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:50:13 localhost podman[302671]: 2026-02-01 09:50:13.878069478 +0000 UTC m=+0.093401243 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, release=1769056855, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64) Feb 1 04:50:13 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:50:13 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:13 localhost podman[302672]: 2026-02-01 09:50:13.922247364 +0000 UTC m=+0.135601537 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Feb 1 04:50:13 localhost podman[302672]: 2026-02-01 09:50:13.956521743 +0000 UTC m=+0.169875916 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:50:13 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:50:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:15 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:17 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:19 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:50:21 Feb 1 04:50:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:50:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:50:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['vms', 'manila_data', 'backups', 'images', 'volumes', '.mgr', 'manila_metadata'] Feb 1 04:50:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 
1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:50:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.0021774090359203426 quantized to 16 (current 16) Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:50:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:50:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:50:21 localhost podman[302707]: 2026-02-01 09:50:21.886994442 +0000 UTC m=+0.096061975 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 1 04:50:21 localhost podman[302707]: 2026-02-01 09:50:21.89752241 +0000 UTC m=+0.106589913 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 1 04:50:21 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:50:21 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:23 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:25 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:50:27 localhost podman[302726]: 2026-02-01 09:50:27.864503709 +0000 UTC m=+0.080012975 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:50:27 localhost podman[302726]: 2026-02-01 09:50:27.879900309 +0000 UTC m=+0.095409525 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
config_id=podman_exporter) Feb 1 04:50:27 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:50:27 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:50:29 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:50:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:50:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:50:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:50:29 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 4f8e6ba9-c226-4470-b127-1e620a7c18ec (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:50:29 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 4f8e6ba9-c226-4470-b127-1e620a7c18ec (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:50:29 localhost ceph-mgr[278126]: [progress INFO root] Completed event 4f8e6ba9-c226-4470-b127-1e620a7c18ec (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:50:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:50:29 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:50:29 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:30 localhost podman[236852]: time="2026-02-01T09:50:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:50:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:50:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:50:30 localhost podman[236852]: @ - - [01/Feb/2026:09:50:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153530 "" "Go-http-client/1.1" Feb 1 04:50:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:30 localhost podman[236852]: @ - - [01/Feb/2026:09:50:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17789 "" "Go-http-client/1.1" Feb 1 04:50:31 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:50:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/progress/completed}] v 0) Feb 1 04:50:31 localhost openstack_network_exporter[239388]: ERROR 09:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:50:31 localhost openstack_network_exporter[239388]: Feb 1 04:50:31 localhost openstack_network_exporter[239388]: ERROR 09:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:50:31 localhost openstack_network_exporter[239388]: Feb 1 04:50:31 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:50:33 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:35 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:37 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:50:39 localhost podman[302838]: 2026-02-01 09:50:39.877417195 +0000 UTC m=+0.085065502 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:50:39 localhost podman[302837]: 2026-02-01 09:50:39.924244575 +0000 UTC m=+0.132310415 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 1 04:50:39 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:39 localhost podman[302838]: 2026-02-01 09:50:39.941816122 +0000 UTC m=+0.149464469 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:50:39 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:50:40 localhost podman[302837]: 2026-02-01 09:50:40.005716164 +0000 UTC m=+0.213781994 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller) Feb 1 04:50:40 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:50:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:40 localhost nova_compute[274317]: 2026-02-01 09:50:40.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:40 localhost nova_compute[274317]: 2026-02-01 09:50:40.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:50:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:41.768 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:50:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:41.769 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:50:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:41.769 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:50:41 localhost ceph-mgr[278126]: 
log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:43 localhost nova_compute[274317]: 2026-02-01 09:50:43.131 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:43 localhost nova_compute[274317]: 2026-02-01 09:50:43.132 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:50:43 localhost nova_compute[274317]: 2026-02-01 09:50:43.132 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:50:43 localhost nova_compute[274317]: 2026-02-01 09:50:43.159 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:50:43 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:50:44 localhost podman[302886]: 2026-02-01 09:50:44.847273752 +0000 UTC m=+0.065793251 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-type=git, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 04:50:44 localhost podman[302886]: 2026-02-01 09:50:44.859812233 +0000 UTC m=+0.078331782 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, container_name=openstack_network_exporter, release=1769056855, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 1 04:50:44 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:50:44 localhost podman[302887]: 2026-02-01 09:50:44.861823656 +0000 UTC m=+0.073508803 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:50:44 localhost podman[302887]: 2026-02-01 09:50:44.94183621 +0000 UTC m=+0.153521357 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:50:44 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:50:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:45 localhost nova_compute[274317]: 2026-02-01 09:50:45.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:50:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5755 writes, 24K keys, 5755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5755 writes, 912 syncs, 6.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 250 writes, 446 keys, 250 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 250 writes, 125 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:50:45 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:46 localhost nova_compute[274317]: 2026-02-01 09:50:46.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:46 localhost nova_compute[274317]: 2026-02-01 09:50:46.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:46 localhost nova_compute[274317]: 2026-02-01 09:50:46.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:47 localhost nova_compute[274317]: 2026-02-01 09:50:47.161 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:47 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.099 274321 DEBUG oslo_service.periodic_task [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.169 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.170 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.170 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.170 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.171 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:50:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:50:48 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1703289481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.628 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.845 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.846 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=12355MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.847 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:50:48 localhost nova_compute[274317]: 2026-02-01 09:50:48.847 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.058 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.058 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.118 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.179 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.179 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.193 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.217 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.231 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:50:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:50:49 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1467814334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.689 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.696 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.767 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.771 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:50:49 localhost nova_compute[274317]: 2026-02-01 09:50:49.771 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:50:49 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:49.826 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:50:49 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:49.827 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:50:49 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 
348127232 kv_alloc: 322961408 Feb 1 04:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:50:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5426 writes, 23K keys, 5426 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5426 writes, 740 syncs, 7.33 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 109 writes, 398 keys, 109 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s#012Interval WAL: 109 writes, 47 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 1 04:50:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 1 04:50:51 localhost nova_compute[274317]: 2026-02-01 09:50:51.772 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:51 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Feb 1 04:50:52 localhost nova_compute[274317]: 2026-02-01 09:50:52.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:52 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:52.737 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdcrtwudu/privsep.sock']#033[00m Feb 1 04:50:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:50:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:52.830 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:50:52 localhost podman[302973]: 2026-02-01 09:50:52.868261973 +0000 UTC m=+0.083648698 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:50:52 localhost podman[302973]: 2026-02-01 09:50:52.907842327 +0000 UTC m=+0.123229032 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:50:52 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.329 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.227 302993 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.231 302993 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.235 302993 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.235 302993 INFO oslo.privsep.daemon [-] privsep daemon running as pid 302993#033[00m Feb 1 04:50:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:53.870 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp8pr7eztf/privsep.sock']#033[00m Feb 1 04:50:53 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Feb 1 04:50:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:54.457 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:50:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:54.361 303002 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:50:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:54.366 303002 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:50:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:54.370 303002 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Feb 1 04:50:54 localhost 
neutron_dhcp_agent[259221]: 2026-02-01 09:50:54.371 303002 INFO oslo.privsep.daemon [-] privsep daemon running as pid 303002#033[00m Feb 1 04:50:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.310 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpoodm2ude/privsep.sock']#033[00m Feb 1 04:50:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e92 e92: 6 total, 6 up, 6 in Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.916 259225 INFO oslo.privsep.daemon [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.771 303014 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.777 303014 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.781 303014 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 1 04:50:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:55.781 303014 INFO oslo.privsep.daemon [-] privsep daemon running as pid 303014#033[00m Feb 1 04:50:55 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 307 B/s wr, 0 op/s Feb 1 04:50:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:57.239 259225 INFO neutron.agent.linux.ip_lib [None req-b3c76f87-d3d1-4736-887d-1f3db49b2c96 - - - - - -] Device tap76e2ea9b-94 cannot be used as it has no MAC address#033[00m Feb 1 04:50:57 localhost kernel: device tap76e2ea9b-94 entered promiscuous mode Feb 1 04:50:57 localhost NetworkManager[5972]: [1769939457.3422] manager: (tap76e2ea9b-94): new Generic device (/org/freedesktop/NetworkManager/Devices/13) Feb 1 04:50:57 localhost ovn_controller[152787]: 2026-02-01T09:50:57Z|00025|binding|INFO|Claiming lport 76e2ea9b-94ac-478b-8fb0-863a2f63759c for this chassis. Feb 1 04:50:57 localhost ovn_controller[152787]: 2026-02-01T09:50:57Z|00026|binding|INFO|76e2ea9b-94ac-478b-8fb0-863a2f63759c: Claiming unknown Feb 1 04:50:57 localhost systemd-udevd[303029]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:50:57 localhost journal[224955]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, ) Feb 1 04:50:57 localhost journal[224955]: hostname: np0005604215.localdomain Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost ovn_controller[152787]: 2026-02-01T09:50:57Z|00027|binding|INFO|Setting lport 76e2ea9b-94ac-478b-8fb0-863a2f63759c ovn-installed in OVS Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost journal[224955]: ethtool ioctl error on tap76e2ea9b-94: No such device Feb 1 04:50:57 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 125 MiB data, 645 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s Feb 1 04:50:58 localhost nova_compute[274317]: 2026-02-01 09:50:58.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:50:58 localhost nova_compute[274317]: 2026-02-01 09:50:58.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:50:58 localhost ovn_controller[152787]: 2026-02-01T09:50:58Z|00028|binding|INFO|Setting lport 76e2ea9b-94ac-478b-8fb0-863a2f63759c up in Southbound Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.281 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29b63d789cd547019a15ada42140b6b4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c0b803-4f99-4e3e-ab61-20b4c613d0ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=76e2ea9b-94ac-478b-8fb0-863a2f63759c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:50:58 
localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.283 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 76e2ea9b-94ac-478b-8fb0-863a2f63759c in datapath 7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f bound to our chassis#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.286 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8684fce5-bb3f-4eb0-bf35-2f3948d473ca IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.287 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.288 158655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpejz9ttd8/privsep.sock']#033[00m Feb 1 04:50:58 localhost nova_compute[274317]: 2026-02-01 09:50:58.506 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:50:58 localhost podman[303106]: Feb 1 04:50:58 localhost podman[303106]: 2026-02-01 09:50:58.79551548 +0000 UTC m=+0.041609858 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:50:58 localhost podman[303118]: 2026-02-01 09:50:58.904663562 +0000 UTC m=+0.119649150 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:50:58 localhost podman[303106]: 2026-02-01 09:50:58.921856169 +0000 UTC m=+0.167950507 container create 3a3e234999fe81c21d45fee4ad4a786614993ad042362467bfd117b0c09b08b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:50:58 localhost podman[303118]: 2026-02-01 09:50:58.943818052 +0000 UTC m=+0.158803650 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 04:50:58 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.970 158655 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.971 158655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpejz9ttd8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.849 303130 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.856 303130 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.859 303130 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.859 303130 INFO oslo.privsep.daemon [-] privsep daemon running as pid 303130#033[00m Feb 1 04:50:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:58.974 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[22d5d1a9-1d83-4a1a-b59e-bd0c994b62e7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:50:58 localhost systemd[1]: Started libpod-conmon-3a3e234999fe81c21d45fee4ad4a786614993ad042362467bfd117b0c09b08b8.scope. Feb 1 04:50:58 localhost systemd[1]: Started libcrun container. 
Feb 1 04:50:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1832dca9d10799b404d2490d5c7e80ce7bd78dcf138fd23f5c49d32d39ab97fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:50:59 localhost podman[303106]: 2026-02-01 09:50:59.010423409 +0000 UTC m=+0.256517717 container init 3a3e234999fe81c21d45fee4ad4a786614993ad042362467bfd117b0c09b08b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 1 04:50:59 localhost podman[303106]: 2026-02-01 09:50:59.019469481 +0000 UTC m=+0.265563809 container start 3a3e234999fe81c21d45fee4ad4a786614993ad042362467bfd117b0c09b08b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:50:59 localhost dnsmasq[303151]: started, version 2.85 cachesize 150 Feb 1 04:50:59 localhost dnsmasq[303151]: DNS service limited to local subnets Feb 1 04:50:59 localhost dnsmasq[303151]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:50:59 localhost dnsmasq[303151]: warning: no upstream servers configured Feb 1 04:50:59 localhost dnsmasq-dhcp[303151]: DHCP, static leases only on 192.168.199.0, lease time 1d Feb 1 04:50:59 localhost dnsmasq[303151]: read /var/lib/neutron/dhcp/7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f/addn_hosts - 0 addresses Feb 1 04:50:59 localhost dnsmasq-dhcp[303151]: read /var/lib/neutron/dhcp/7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f/host Feb 1 04:50:59 localhost dnsmasq-dhcp[303151]: read /var/lib/neutron/dhcp/7fc4cf94-32b4-4b60-bdef-3e96b4aa6c2f/opts Feb 1 04:50:59 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:50:59.370 259225 INFO neutron.agent.dhcp.agent [None req-4160db1a-252e-4d10-b129-8d06cb2b1c89 - - - - - -] DHCP configuration for ports {'0d5f687b-71f8-4b2a-9e1c-1e5f246bb2a3'} is completed#033[00m Feb 1 04:50:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:59.431 303130 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:50:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:59.431 303130 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:50:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:59.431 303130 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:50:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:50:59.524 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d917c27f-e2cb-4af4-b58c-205875afdb03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:50:59 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 125 MiB data, 645 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s Feb 1 04:51:00 localhost podman[236852]: time="2026-02-01T09:51:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:51:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 e93: 6 total, 6 up, 6 in Feb 1 04:51:00 localhost podman[236852]: @ - - [01/Feb/2026:09:51:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:51:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:00 localhost podman[236852]: @ - - [01/Feb/2026:09:51:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18268 "" "Go-http-client/1.1" Feb 1 04:51:01 localhost openstack_network_exporter[239388]: ERROR 09:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:51:01 localhost openstack_network_exporter[239388]: Feb 1 04:51:01 localhost openstack_network_exporter[239388]: ERROR 09:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:51:01 localhost openstack_network_exporter[239388]: Feb 1 04:51:01 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 125 MiB data, 645 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.406 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] 
Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:51:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:51:03 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 5.0 MiB/s wr, 46 op/s Feb 1 04:51:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:05 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s Feb 1 04:51:07 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s Feb 1 04:51:09 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s Feb 1 04:51:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:51:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:51:10 localhost systemd[1]: tmp-crun.ZTKjpd.mount: Deactivated successfully. 
Feb 1 04:51:10 localhost podman[303154]: 2026-02-01 09:51:10.851711862 +0000 UTC m=+0.074413920 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:51:10 localhost podman[303154]: 2026-02-01 09:51:10.922644653 +0000 UTC m=+0.145346721 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:51:10 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:51:10 localhost podman[303155]: 2026-02-01 09:51:10.993174292 +0000 UTC m=+0.209896703 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:51:11 localhost podman[303155]: 2026-02-01 09:51:11.005735073 +0000 UTC m=+0.222457504 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:51:11 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:51:11 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s Feb 1 04:51:13 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s Feb 1 04:51:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:51:15 localhost podman[303201]: 2026-02-01 09:51:15.866682644 +0000 UTC m=+0.073749440 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:51:15 localhost podman[303200]: 2026-02-01 09:51:15.924762574 +0000 UTC m=+0.135678120 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, build-date=2026-01-22T05:09:47Z, release=1769056855, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 
'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, version=9.7) Feb 1 04:51:15 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:15 localhost podman[303201]: 2026-02-01 09:51:15.947866304 +0000 UTC m=+0.154933100 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:51:15 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:51:15 localhost podman[303200]: 2026-02-01 09:51:15.966801555 +0000 UTC m=+0.177717101 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:51:15 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:51:17 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:19 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:51:21 Feb 1 04:51:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:51:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:51:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['.mgr', 'images', 'backups', 'volumes', 'manila_data', 'vms', 'manila_metadata'] Feb 1 04:51:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost 
ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:51:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16) Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:51:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:51:21 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:51:23 localhost podman[303239]: 2026-02-01 09:51:23.862640067 +0000 UTC m=+0.080912814 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:51:23 localhost podman[303239]: 2026-02-01 09:51:23.875747034 +0000 UTC m=+0.094019792 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:51:23 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:51:23 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:25 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:27 localhost ovn_controller[152787]: 2026-02-01T09:51:27Z|00029|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory Feb 1 04:51:27 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:51:29 localhost podman[303276]: 2026-02-01 09:51:29.833242606 +0000 UTC m=+0.084880387 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:51:29 localhost podman[303276]: 2026-02-01 09:51:29.846812249 +0000 UTC m=+0.098450110 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 
'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:51:29 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:51:29 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:30 localhost podman[236852]: time="2026-02-01T09:51:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:51:30 localhost podman[236852]: @ - - [01/Feb/2026:09:51:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:51:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:30 localhost podman[236852]: @ - - [01/Feb/2026:09:51:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18270 "" "Go-http-client/1.1" Feb 1 04:51:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:51:30 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:51:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:51:30 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:51:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:51:30 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 6adb7234-316a-43c5-a8de-c5b6e5f01375 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:51:30 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 6adb7234-316a-43c5-a8de-c5b6e5f01375 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:51:30 localhost ceph-mgr[278126]: [progress INFO root] Completed event 6adb7234-316a-43c5-a8de-c5b6e5f01375 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:51:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:51:30 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:51:31 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:51:31 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:51:31 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:51:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/progress/completed}] v 0) Feb 1 04:51:31 localhost openstack_network_exporter[239388]: ERROR 09:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:51:31 localhost openstack_network_exporter[239388]: Feb 1 04:51:31 localhost openstack_network_exporter[239388]: ERROR 09:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:51:31 localhost openstack_network_exporter[239388]: Feb 1 04:51:31 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:51:33 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:51:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2524954826' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:51:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:51:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2524954826' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:51:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:35 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:37 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:39 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:41.769 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:51:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:41.770 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:51:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:41.770 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:51:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:51:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:51:41 localhost systemd[1]: tmp-crun.Y4H479.mount: Deactivated successfully. Feb 1 04:51:41 localhost podman[303368]: 2026-02-01 09:51:41.870090576 +0000 UTC m=+0.083851165 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:51:41 localhost podman[303368]: 2026-02-01 09:51:41.914732898 +0000 UTC m=+0.128493457 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 1 04:51:41 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:51:41 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:41 localhost podman[303369]: 2026-02-01 09:51:41.916150621 +0000 UTC m=+0.128223198 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:51:41 localhost podman[303369]: 2026-02-01 09:51:41.996232938 +0000 UTC m=+0.208305515 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:51:42 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
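The node_exporter container checked above runs with 'net': 'host' and 'ports': ['9100:9100'], so its Prometheus endpoint is reachable directly on the host. A quick sketch for confirming it answers and that the systemd collector filter from the config is in effect; the /metrics path and the node_systemd_unit_state metric name are assumptions based on node_exporter defaults, not something shown in the log:

#!/usr/bin/env python3
"""Confirm node_exporter answers on the host port mapped in the config above."""
import urllib.request

URL = "http://127.0.0.1:9100/metrics"  # 9100:9100 with net=host per the container config

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

samples = [line for line in body.splitlines() if line and not line.startswith("#")]
print(f"{len(samples)} samples scraped")

# The config enables --collector.systemd with a unit-include filter, so only the
# matched units (edpm_*, ovs*, openvswitch, virt*, rsyslog) should appear here.
for line in samples:
    if line.startswith("node_systemd_unit_state"):
        print(line)
        break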
Feb 1 04:51:43 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:45 localhost nova_compute[274317]: 2026-02-01 09:51:45.506 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:45 localhost nova_compute[274317]: 2026-02-01 09:51:45.507 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:51:45 localhost nova_compute[274317]: 2026-02-01 09:51:45.507 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:51:45 localhost nova_compute[274317]: 2026-02-01 09:51:45.523 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:51:45 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
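The nova_compute "Running periodic task ComputeManager._heal_instance_info_cache" entries above come from oslo.service's periodic-task machinery (run_periodic_tasks at oslo_service/periodic_task.py:210 in the logged paths). A minimal sketch of that pattern, not nova's actual manager, assuming oslo.service and oslo.config are importable:

#!/usr/bin/env python3
"""Minimal oslo.service periodic-task pattern, like the nova-compute entries above."""
import time
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self, conf):
        super().__init__(conf)

    @periodic_task.periodic_task(spacing=10)  # seconds between runs
    def _heal_cache(self, context):
        print("healing info cache (no instances found)")

    @periodic_task.periodic_task(spacing=10, run_immediately=True)
    def _poll_something(self, context):
        print("polling")

if __name__ == "__main__":
    mgr = Manager(cfg.CONF)
    for _ in range(3):
        # nova's service loop calls this on a timer; only tasks whose spacing
        # has elapsed actually run on each pass.
        mgr.run_periodic_tasks(None, raise_on_error=True)
        time.sleep(1)

nova's service loop invokes run_periodic_tasks on a timer; the spacing on each decorated method controls how often that task is considered due, which is why the same request-id keeps cycling through the _poll_* tasks in the entries above.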
Feb 1 04:51:46 localhost podman[303414]: 2026-02-01 09:51:46.867425487 +0000 UTC m=+0.078393205 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:51:46 localhost podman[303414]: 2026-02-01 09:51:46.878566434 +0000 UTC m=+0.089534182 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7) Feb 1 04:51:46 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
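The openstack_network_exporter errors seen at 09:51:31 ("call(dpif-netdev/pmd-perf-show): please specify an existing datapath", and the same for pmd-rxq-show) are the exporter asking for userspace-datapath PMD statistics; those appctl commands only have data when a netdev (userspace/DPDK) datapath exists, so on a host using only the kernel datapath this message is expected noise rather than a fault. A hedged diagnostic sketch to confirm which datapaths are actually configured; it assumes ovs-appctl is on PATH and the ovs-vswitchd control socket is reachable (i.e. run as root on the host or inside the OVS container):

#!/usr/bin/env python3
"""List OVS datapaths to explain the exporter's 'please specify an existing datapath' errors."""
import subprocess

def appctl(*args: str) -> str:
    # ovs-appctl talks to the local ovs-vswitchd control socket; needs root.
    return subprocess.run(["ovs-appctl", *args],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # 'dpif/show' lists the configured datapaths (e.g. system@ovs-system for the
    # kernel datapath, netdev@... for a userspace/DPDK one).
    print(appctl("dpif/show"))
    # pmd-perf-show / pmd-rxq-show (the calls failing above) only return data
    # when a netdev datapath with PMD threads is present.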
Feb 1 04:51:46 localhost podman[303415]: 2026-02-01 09:51:46.920619745 +0000 UTC m=+0.130625972 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 04:51:46 localhost podman[303415]: 2026-02-01 09:51:46.951091215 +0000 UTC m=+0.161097462 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:51:46 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:51:47 localhost nova_compute[274317]: 2026-02-01 09:51:47.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:47 localhost nova_compute[274317]: 2026-02-01 09:51:47.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:47 localhost nova_compute[274317]: 2026-02-01 09:51:47.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:47 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:48 localhost nova_compute[274317]: 2026-02-01 09:51:48.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:48 localhost nova_compute[274317]: 2026-02-01 09:51:48.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:48 localhost nova_compute[274317]: 2026-02-01 09:51:48.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:51:49 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:49.873 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:51:49 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:49.874 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:51:49 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.116 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.117 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.139 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.139 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.140 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 
2026-02-01 09:51:50.140 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.140 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:51:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:51:50 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2994643580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.592 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.770 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.772 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11992MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.772 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.773 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.923 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.923 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:51:50 localhost nova_compute[274317]: 2026-02-01 09:51:50.984 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:51:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:51:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3487162458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:51:51 localhost nova_compute[274317]: 2026-02-01 09:51:51.408 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:51:51 localhost nova_compute[274317]: 2026-02-01 09:51:51.413 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:51 localhost nova_compute[274317]: 2026-02-01 09:51:51.439 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:51:51 localhost nova_compute[274317]: 2026-02-01 09:51:51.441 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:51:51 localhost nova_compute[274317]: 2026-02-01 09:51:51.442 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:51:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:51:51 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:53 localhost nova_compute[274317]: 2026-02-01 09:51:53.425 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:51:53 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
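Two things are worth unpacking from the resource-tracker entries above. First, the "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" subprocess (matched by the client.openstack "df" dispatch on the mon side) is how nova sizes its RBD-backed disk pool; a sketch of issuing the same query and reading the free-space figure, with field names following ceph's JSON df output:

#!/usr/bin/env python3
"""Run the same 'ceph df' query nova_compute logs above and report free space."""
import json
import subprocess

CMD = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]

def ceph_df() -> dict:
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    return json.loads(out)

if __name__ == "__main__":
    stats = ceph_df()["stats"]
    # total_avail_bytes / total_bytes come from the 'stats' section of ceph's JSON df.
    free_gib = stats["total_avail_bytes"] / 2**30
    total_gib = stats["total_bytes"] / 2**30
    print(f"cluster: {free_gib:.1f} GiB free of {total_gib:.1f} GiB")

Second, the inventory dict logged for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 is what bounds future allocations in placement: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values:

#!/usr/bin/env python3
"""Reproduce the placement capacity implied by the inventory logged above."""

# Values copied from the nova.scheduler.client.report entry above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 0.9},
}

def capacity(inv: dict) -> float:
    # Placement-style usable capacity: (total - reserved) * allocation_ratio.
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

if __name__ == "__main__":
    for rc, inv in inventory.items():
        print(f"{rc:10s} capacity = {capacity(inv):g}")
    # VCPU 128, MEMORY_MB 15226, DISK_GB 36.9

That is why a host reporting free_vcpus=8 in the "Final resource view" can still accept up to 128 VCPU of allocations at the 16.0 overcommit ratio, while disk is held below the raw 41 GB by the 0.9 ratio.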
Feb 1 04:51:54 localhost podman[303496]: 2026-02-01 09:51:54.866375492 +0000 UTC m=+0.080891232 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 1 04:51:54 localhost podman[303496]: 2026-02-01 09:51:54.876200398 +0000 UTC m=+0.090716138 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:51:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:51:54.877 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:51:54 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:51:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:51:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:51:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:00 localhost podman[236852]: time="2026-02-01T09:52:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:52:00 localhost podman[236852]: @ - - [01/Feb/2026:09:52:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:52:00 localhost podman[236852]: @ - - [01/Feb/2026:09:52:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18273 "" "Go-http-client/1.1" Feb 1 04:52:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:52:00 localhost systemd[1]: tmp-crun.0SPMoT.mount: Deactivated successfully. 
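The podman[236852] access-log lines above ("GET /v4.9.3/libpod/containers/json", ".../containers/stats") are the podman API service answering over its unix socket, most likely for the podman_exporter configured in the next entry (CONTAINER_HOST=unix:///run/podman/podman.sock). A sketch of issuing the same request directly; the socket path and API version string are taken from the log, while the Names/State field names are assumed from the libpod API's JSON:

#!/usr/bin/env python3
"""Query the podman libpod REST API over its unix socket, like the GETs logged above."""
import http.client
import json
import socket

SOCKET_PATH = "/run/podman/podman.sock"  # from CONTAINER_HOST in the exporter config

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to an AF_UNIX socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection(SOCKET_PATH)
    # Same endpoint and API version the access log above shows.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c.get("Names"), c.get("State"))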
Feb 1 04:52:00 localhost podman[303515]: 2026-02-01 09:52:00.87320184 +0000 UTC m=+0.087208539 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:52:00 localhost podman[303515]: 2026-02-01 09:52:00.906614432 +0000 UTC m=+0.120621081 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:52:00 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:52:01 localhost openstack_network_exporter[239388]: ERROR 09:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:52:01 localhost openstack_network_exporter[239388]: Feb 1 04:52:01 localhost openstack_network_exporter[239388]: ERROR 09:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:52:01 localhost openstack_network_exporter[239388]: Feb 1 04:52:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:02 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:02.741 2 INFO neutron.agent.securitygroups_rpc [None req-3ef8af06-8ebd-433a-810c-d499d03d752f 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Security group member updated ['f05aaf36-904c-44ae-a203-34e61744db7d']#033[00m Feb 1 04:52:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:04.978 2 INFO neutron.agent.securitygroups_rpc [None req-48e73cc9-05d5-4151-9bdf-df0a5d68c81e 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Security group member updated ['f05aaf36-904c-44ae-a203-34e61744db7d']#033[00m Feb 1 04:52:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:05.497 259225 INFO neutron.agent.linux.ip_lib [None req-e9ae3fea-bfdb-4e14-8ee9-6ee644d63438 - - - - - -] Device tapc8e9dce8-3c cannot be used as it has no MAC address#033[00m Feb 1 04:52:05 localhost kernel: device tapc8e9dce8-3c entered promiscuous mode Feb 1 04:52:05 localhost ovn_controller[152787]: 2026-02-01T09:52:05Z|00030|binding|INFO|Claiming lport c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 for this chassis. Feb 1 04:52:05 localhost ovn_controller[152787]: 2026-02-01T09:52:05Z|00031|binding|INFO|c8e9dce8-3cef-4d4b-8d3c-5d13d0890663: Claiming unknown Feb 1 04:52:05 localhost NetworkManager[5972]: [1769939525.5650] manager: (tapc8e9dce8-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Feb 1 04:52:05 localhost systemd-udevd[303549]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:52:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:05.584 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=722e7a10-7816-489f-9516-bc350daf9fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=c8e9dce8-3cef-4d4b-8d3c-5d13d0890663) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:05.589 158655 INFO neutron.agent.ovn.metadata.agent [-] Port c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 in datapath 9c0246b2-3507-4017-b8dd-01251187a6c3 bound to our chassis#033[00m Feb 1 04:52:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:05.592 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2f157b64-12ad-48f6-bd1f-788194f131e8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:52:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:05.592 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9c0246b2-3507-4017-b8dd-01251187a6c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:05.593 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c92a974e-63e1-4f07-8d7c-c9b4c0784913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:05 localhost ovn_controller[152787]: 2026-02-01T09:52:05Z|00032|binding|INFO|Setting lport c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 ovn-installed in OVS Feb 1 04:52:05 localhost ovn_controller[152787]: 2026-02-01T09:52:05Z|00033|binding|INFO|Setting lport c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 up in Southbound Feb 1 04:52:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:06 localhost podman[303602]: Feb 1 04:52:06 localhost podman[303602]: 2026-02-01 09:52:06.428244297 +0000 UTC m=+0.087317113 container create 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:52:06 localhost systemd[1]: Started libpod-conmon-23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0.scope. Feb 1 04:52:06 localhost systemd[1]: tmp-crun.HVqGBr.mount: Deactivated successfully. Feb 1 04:52:06 localhost podman[303602]: 2026-02-01 09:52:06.383828992 +0000 UTC m=+0.042901838 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:52:06 localhost systemd[1]: Started libcrun container. Feb 1 04:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53daeb5ecd744789a19f463b75866691ebb76af9c46948b934b77d0920c93713/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:06 localhost podman[303602]: 2026-02-01 09:52:06.515376213 +0000 UTC m=+0.174449039 container init 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:52:06 localhost podman[303602]: 2026-02-01 09:52:06.523975981 +0000 UTC m=+0.183048807 container start 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:52:06 localhost dnsmasq[303619]: started, version 2.85 cachesize 150 Feb 1 04:52:06 localhost dnsmasq[303619]: DNS service limited to local subnets Feb 1 04:52:06 localhost dnsmasq[303619]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:52:06 localhost dnsmasq[303619]: warning: no upstream servers configured Feb 1 04:52:06 localhost dnsmasq-dhcp[303619]: DHCP, static leases only on 19.80.0.0, lease time 1d Feb 1 04:52:06 localhost dnsmasq[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/addn_hosts - 0 addresses Feb 1 04:52:06 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/host Feb 1 04:52:06 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/opts Feb 1 04:52:06 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:06.567 259225 INFO neutron.agent.dhcp.agent [None req-d86e1c16-84a5-4dd8-a2ec-4aa9b3dd89ba - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2026-02-01T09:52:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d16170e5-2dd1-4d5e-a380-5344cdba0aa7, ip_allocation=immediate, mac_address=fa:16:3e:db:2d:9c, name=tempest-subport-491001553, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:52:02Z, description=, dns_domain=, id=9c0246b2-3507-4017-b8dd-01251187a6c3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-910372982, port_security_enabled=True, project_id=ebe5e345d591408fa955b2e811bfaffb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=95, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=271, status=ACTIVE, subnets=['4d0b3f04-7e3e-474c-9d7e-bf58f363cb51'], tags=[], tenant_id=ebe5e345d591408fa955b2e811bfaffb, updated_at=2026-02-01T09:52:03Z, vlan_transparent=None, network_id=9c0246b2-3507-4017-b8dd-01251187a6c3, port_security_enabled=True, project_id=ebe5e345d591408fa955b2e811bfaffb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f05aaf36-904c-44ae-a203-34e61744db7d'], standard_attr_id=276, status=DOWN, tags=[], tenant_id=ebe5e345d591408fa955b2e811bfaffb, updated_at=2026-02-01T09:52:04Z on network 9c0246b2-3507-4017-b8dd-01251187a6c3#033[00m Feb 1 04:52:06 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:06.674 259225 INFO neutron.agent.dhcp.agent [None req-11cea9ce-82f8-4593-a73b-419f2d4bec5d - - - - - -] DHCP configuration for ports {'8e91955e-c3fb-4309-8605-7dae9ca4cd95'} is completed#033[00m Feb 1 04:52:06 localhost dnsmasq[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/addn_hosts - 1 addresses Feb 1 04:52:06 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/host Feb 1 04:52:06 localhost podman[303635]: 2026-02-01 09:52:06.825464659 +0000 UTC m=+0.059147425 container kill 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:52:06 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/opts Feb 1 04:52:07 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:07.835 259225 INFO neutron.agent.dhcp.agent [None req-63c1010f-df59-4e27-bee2-d2dcf01f190a - - - - - -] DHCP configuration for ports {'d16170e5-2dd1-4d5e-a380-5344cdba0aa7'} is completed#033[00m Feb 1 04:52:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:12 localhost 
ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:52:12 localhost podman[303657]: 2026-02-01 09:52:12.913003834 +0000 UTC m=+0.118946359 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:52:12 localhost systemd[1]: tmp-crun.BdsBs5.mount: Deactivated successfully. 
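The dnsmasq instance started at 09:52:06 above for network 9c0246b2-3507-4017-b8dd-01251187a6c3 serves static leases only, from per-network files the DHCP agent writes under /var/lib/neutron/dhcp/<network-id>/ (addn_hosts, host, opts) and re-reads on each reload_allocations. A hedged sketch of listing those reservations; the comma-separated MAC,hostname,IP layout of the host file is an assumption about how neutron writes dnsmasq's hostsfile, not something shown in the log:

#!/usr/bin/env python3
"""List the static DHCP reservations neutron handed to dnsmasq for one network."""
from pathlib import Path

# Network ID taken from the dnsmasq entries above.
NET = "9c0246b2-3507-4017-b8dd-01251187a6c3"
HOST_FILE = Path("/var/lib/neutron/dhcp") / NET / "host"

if __name__ == "__main__":
    # Assumed layout: one reservation per line, comma-separated, starting with
    # the MAC address (dnsmasq hostsfile style).
    for line in HOST_FILE.read_text().splitlines():
        if not line.strip():
            continue
        fields = line.split(",")
        print(f"mac={fields[0]}  rest={fields[1:]}")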
Feb 1 04:52:12 localhost podman[303656]: 2026-02-01 09:52:12.934920498 +0000 UTC m=+0.141237194 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:12 localhost podman[303657]: 2026-02-01 09:52:12.949707468 +0000 UTC m=+0.155649993 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:52:12 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:52:12 localhost podman[303656]: 2026-02-01 09:52:12.975726829 +0000 UTC m=+0.182043565 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 1 04:52:12 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
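The ceph-mon entry that closes the line above opens a RocksDB flush/compaction cycle on the monitor store, and the entries that follow carry structured EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished). Below is a sketch for pulling those payloads out of the raw log for offline analysis; it decodes the first JSON object after the EVENT_LOG_v1 marker and ignores whatever trails it.

    import json

    MARKER = "EVENT_LOG_v1 "
    _decoder = json.JSONDecoder()

    def extract_rocksdb_events(lines):
        """Yield parsed EVENT_LOG_v1 payloads from rocksdb syslog lines.

        raw_decode() parses the first JSON object and ignores any text
        that follows it, which matters here because several log entries
        can end up concatenated on a single line.
        """
        for line in lines:
            idx = line.find(MARKER)
            if idx == -1:
                continue
            try:
                event, _ = _decoder.raw_decode(line[idx + len(MARKER):])
            except json.JSONDecodeError:
                continue                     # truncated or mangled entry
            yield event

    # Usage sketch:
    #   with open("/var/log/messages") as fh:
    #       for ev in extract_rocksdb_events(fh):
    #           print(ev["time_micros"], ev["event"])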
Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.442238) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533442277, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2518, "num_deletes": 252, "total_data_size": 6620019, "memory_usage": 6994848, "flush_reason": "Manual Compaction"} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533469189, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4263172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14172, "largest_seqno": 16685, "table_properties": {"data_size": 4253747, "index_size": 5866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21052, "raw_average_key_size": 21, "raw_value_size": 4234297, "raw_average_value_size": 4307, "num_data_blocks": 251, "num_entries": 983, "num_filter_entries": 983, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939360, "oldest_key_time": 1769939360, "file_creation_time": 1769939533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 27010 microseconds, and 8590 cpu microseconds. Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
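A quick sanity check on the JOB 7 flush just logged: table #20 is 4,263,172 bytes and the flush "lasted 27010 microseconds", i.e. roughly 150 MiB/s, and the resulting SST holds 983 entries (251 of them tombstones) out of the 2,518 memtable entries. The arithmetic below reproduces those figures from the numbers in the log; nothing RocksDB-specific is assumed.

    # Figures copied from the JOB 7 flush events logged above.
    memtable_entries = 2518      # "num_entries" in flush_started
    memtable_bytes = 6620019     # "total_data_size" in flush_started
    sst_entries = 983            # "num_entries" of table #20
    sst_bytes = 4263172          # "file_size" of table #20
    flush_micros = 27010         # "Flush lasted 27010 microseconds"

    throughput_mib_s = sst_bytes / (flush_micros / 1e6) / 2**20
    survived = sst_entries / memtable_entries
    size_ratio = sst_bytes / memtable_bytes

    print(f"flush throughput  ~{throughput_mib_s:.0f} MiB/s")           # ~150 MiB/s
    print(f"entries surviving ~{survived:.0%} of the memtable")         # ~39%
    print(f"SST size          ~{size_ratio:.0%} of raw memtable data")  # ~64%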
Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.469246) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4263172 bytes OK Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.469271) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.471144) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.471165) EVENT_LOG_v1 {"time_micros": 1769939533471160, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.471190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6608621, prev total WAL file size 6608621, number of live WAL files 2. Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.472532) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end) Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4163KB)], [18(19MB)] Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533472585, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 24787287, "oldest_snapshot_seqno": -1} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 12258 keys, 21729359 bytes, temperature: kUnknown Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533627180, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 21729359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21659258, "index_size": 38384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30661, "raw_key_size": 329844, "raw_average_key_size": 26, "raw_value_size": 21450111, "raw_average_value_size": 1749, "num_data_blocks": 1453, "num_entries": 12258, "num_filter_entries": 12258, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.627565) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 21729359 bytes Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.629397) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 140.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 19.6 +0.0 blob) out(20.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.1) OK, records in: 12792, records dropped: 534 output_compression: NoCompression Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.629427) EVENT_LOG_v1 {"time_micros": 1769939533629415, "job": 8, "event": "compaction_finished", "compaction_time_micros": 154747, "compaction_time_cpu_micros": 50939, "output_level": 6, "num_output_files": 1, "total_output_size": 21729359, "num_input_records": 12792, "num_output_records": 12258, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533630533, "job": 8, "event": "table_file_deletion", "file_number": 20} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939533633796, "job": 8, "event": "table_file_deletion", "file_number": 18} Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.472471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.633980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.633993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.633999) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.634004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:52:13 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:52:13.634011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:52:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 145 MiB data, 707 MiB used, 41 GiB / 42 GiB avail Feb 1 04:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:52:17 localhost podman[303705]: 2026-02-01 09:52:17.867926895 +0000 UTC m=+0.078701915 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., version=9.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, release=1769056855) Feb 1 04:52:17 localhost podman[303705]: 2026-02-01 09:52:17.911626467 +0000 UTC m=+0.122401437 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, architecture=x86_64, version=9.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 04:52:17 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
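Back on the RocksDB side, the JOB 8 manual compaction summarized at 09:52:13 reports write-amplify(5.1) and read-write-amplify(10.9). Those factors are consistent with the byte counts in the same entries: 4,263,172 bytes of freshly flushed L0 input, 24,787,287 bytes of total input (the L0 table plus the existing L6 table #18), and 21,729,359 bytes written out as table #21. The sketch below simply reproduces the arithmetic.

    # Byte counts from the JOB 8 compaction events logged above.
    l0_input = 4263172       # new data: the flushed table #20
    total_input = 24787287   # "input_data_size": table #20 + existing L6 table #18
    l6_output = 21729359     # table #21 written back to L6

    write_amplify = l6_output / l0_input
    read_write_amplify = (total_input + l6_output) / l0_input

    print(f"write-amplify      ~{write_amplify:.1f}")       # ~5.1, as logged
    print(f"read-write-amplify ~{read_write_amplify:.1f}")  # ~10.9, as logged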
Feb 1 04:52:17 localhost podman[303706]: 2026-02-01 09:52:17.918208293 +0000 UTC m=+0.125979708 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:52:18 localhost podman[303706]: 2026-02-01 09:52:18.000608261 +0000 UTC m=+0.208379696 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible) Feb 1 04:52:18 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:52:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 238 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 60 op/s Feb 1 04:52:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 238 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 60 op/s Feb 1 04:52:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:52:21 Feb 1 04:52:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:52:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:52:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'vms', '.mgr', 'backups', 'volumes', 'images'] Feb 1 04:52:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006291494364182021 of space, bias 1.0, pg target 1.2582988728364042 quantized to 32 (current 32) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] 
Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:52:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.0021628687418574354 quantized to 16 (current 16) Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:52:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:52:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 238 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 60 op/s Feb 1 04:52:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 238 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 179 op/s Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.272 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Creating tmpfile /var/lib/nova/instances/tmpm_4plr78 to notify to other compute nodes that they should mount the same storage. 
_create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.834 274321 DEBUG nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm_4plr78',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.860 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.861 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.872 274321 INFO nova.compute.rpcapi [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Feb 1 04:52:24 localhost nova_compute[274317]: 2026-02-01 09:52:24.873 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:52:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
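The nova-compute entries above are the destination half of a live-migration pre-check: this host creates a temporary file (tmpm_4plr78) under /var/lib/nova/instances so the source host can look for it and decide whether both nodes share the same instance store. Below is a minimal sketch of that probe pattern, not nova's actual implementation; the directory path and the host roles come from the log, everything else is illustrative.

    import os
    import tempfile

    INSTANCES_DIR = "/var/lib/nova/instances"    # path seen in the log

    def create_probe_file(base_dir: str = INSTANCES_DIR) -> str:
        """Destination host: drop a uniquely named file in the instance store."""
        fd, path = tempfile.mkstemp(dir=base_dir)
        os.close(fd)
        return os.path.basename(path)            # e.g. "tmpm_4plr78"

    def probe_file_visible(filename: str, base_dir: str = INSTANCES_DIR) -> bool:
        """Source host: if the file shows up here too, the store is shared."""
        return os.path.exists(os.path.join(base_dir, filename))

    def cleanup_probe_file(filename: str, base_dir: str = INSTANCES_DIR) -> None:
        """Remove the probe once the result has been reported back."""
        try:
            os.unlink(os.path.join(base_dir, filename))
        except FileNotFoundError:
            pass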
Feb 1 04:52:25 localhost podman[303742]: 2026-02-01 09:52:25.867347253 +0000 UTC m=+0.082758120 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:52:25 localhost podman[303742]: 2026-02-01 09:52:25.876439427 +0000 UTC m=+0.091850284 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:52:25 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:52:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:25.963 259225 INFO neutron.agent.linux.ip_lib [None req-f3b93c3d-7229-4877-93ce-2ef790c80c9d - - - - - -] Device tap9ba17182-29 cannot be used as it has no MAC address#033[00m Feb 1 04:52:25 localhost kernel: device tap9ba17182-29 entered promiscuous mode Feb 1 04:52:25 localhost NetworkManager[5972]: [1769939545.9898] manager: (tap9ba17182-29): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Feb 1 04:52:25 localhost ovn_controller[152787]: 2026-02-01T09:52:25Z|00034|binding|INFO|Claiming lport 9ba17182-297c-4dca-a0cf-d9bfe1422e70 for this chassis. Feb 1 04:52:25 localhost ovn_controller[152787]: 2026-02-01T09:52:25Z|00035|binding|INFO|9ba17182-297c-4dca-a0cf-d9bfe1422e70: Claiming unknown Feb 1 04:52:25 localhost systemd-udevd[303772]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:52:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:25.998 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-1a91bc36-a078-4e5e-bd8f-3f791a7ad269', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a91bc36-a078-4e5e-bd8f-3f791a7ad269', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a75b32a03c2b49f0927f81d1bf3f53d7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c10e3803-7396-403d-9d9d-ba485ed9d9b4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ba17182-297c-4dca-a0cf-d9bfe1422e70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:25.999 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9ba17182-297c-4dca-a0cf-d9bfe1422e70 in datapath 1a91bc36-a078-4e5e-bd8f-3f791a7ad269 bound to our chassis#033[00m Feb 1 04:52:26 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:26.003 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6ad9985-5263-405a-b8c4-55a01e5f2ffe IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:52:26 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:52:26.003 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a91bc36-a078-4e5e-bd8f-3f791a7ad269, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:26 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:26.003 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3ef1b8-8919-4579-8ce9-b1460ef6fa02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:26 localhost ovn_controller[152787]: 2026-02-01T09:52:26Z|00036|binding|INFO|Setting lport 9ba17182-297c-4dca-a0cf-d9bfe1422e70 ovn-installed in OVS Feb 1 04:52:26 localhost ovn_controller[152787]: 2026-02-01T09:52:26Z|00037|binding|INFO|Setting lport 9ba17182-297c-4dca-a0cf-d9bfe1422e70 up in Southbound Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost journal[224955]: ethtool ioctl error on tap9ba17182-29: No such device Feb 1 04:52:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 238 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 179 op/s Feb 1 04:52:26 localhost nova_compute[274317]: 2026-02-01 09:52:26.795 274321 DEBUG nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm_4plr78',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5aefea54-941a-48bf-ad9e-7f13fdfdb4ed',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Feb 1 04:52:26 localhost nova_compute[274317]: 2026-02-01 09:52:26.824 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquiring lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:52:26 localhost nova_compute[274317]: 2026-02-01 09:52:26.825 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquired 
lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:52:26 localhost nova_compute[274317]: 2026-02-01 09:52:26.825 274321 DEBUG nova.network.neutron [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 1 04:52:26 localhost podman[303844]: Feb 1 04:52:26 localhost podman[303844]: 2026-02-01 09:52:26.856483326 +0000 UTC m=+0.091481872 container create 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 1 04:52:26 localhost systemd[1]: Started libpod-conmon-1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38.scope. Feb 1 04:52:26 localhost podman[303844]: 2026-02-01 09:52:26.81203022 +0000 UTC m=+0.047028746 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:52:26 localhost systemd[1]: Started libcrun container. Feb 1 04:52:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b6a4c8167cba5b5fbdfbf9820bb6cd4a6fbd5e72379076f4e21cd139706606/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:26 localhost podman[303844]: 2026-02-01 09:52:26.953247842 +0000 UTC m=+0.188246338 container init 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:52:26 localhost podman[303844]: 2026-02-01 09:52:26.958995871 +0000 UTC m=+0.193994377 container start 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS) Feb 1 04:52:26 localhost dnsmasq[303862]: started, version 2.85 cachesize 150 Feb 1 04:52:26 localhost dnsmasq[303862]: DNS service limited to local subnets Feb 1 04:52:26 localhost dnsmasq[303862]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify 
dumpfile Feb 1 04:52:26 localhost dnsmasq[303862]: warning: no upstream servers configured Feb 1 04:52:26 localhost dnsmasq-dhcp[303862]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:52:26 localhost dnsmasq[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/addn_hosts - 0 addresses Feb 1 04:52:26 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/host Feb 1 04:52:26 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/opts Feb 1 04:52:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:27.064 259225 INFO neutron.agent.dhcp.agent [None req-3479b879-d760-4820-af00-c8479ea8edce - - - - - -] DHCP configuration for ports {'a095506e-75b5-4165-a26f-acc54923bd6f'} is completed#033[00m Feb 1 04:52:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:27.159 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:52:26Z, description=, device_id=f4abece4-f6f1-47bb-a6bc-1160a4cf7739, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=37cf06b3-ab28-46e6-8f77-67f52e288c13, ip_allocation=immediate, mac_address=fa:16:3e:26:e3:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:52:23Z, description=, dns_domain=, id=1a91bc36-a078-4e5e-bd8f-3f791a7ad269, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-129983686-network, port_security_enabled=True, project_id=a75b32a03c2b49f0927f81d1bf3f53d7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=431, status=ACTIVE, subnets=['b8fa8a7c-8ae5-460f-a6c7-4e652af56379'], tags=[], tenant_id=a75b32a03c2b49f0927f81d1bf3f53d7, updated_at=2026-02-01T09:52:24Z, vlan_transparent=None, network_id=1a91bc36-a078-4e5e-bd8f-3f791a7ad269, port_security_enabled=False, project_id=a75b32a03c2b49f0927f81d1bf3f53d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=455, status=DOWN, tags=[], tenant_id=a75b32a03c2b49f0927f81d1bf3f53d7, updated_at=2026-02-01T09:52:27Z on network 1a91bc36-a078-4e5e-bd8f-3f791a7ad269#033[00m Feb 1 04:52:27 localhost dnsmasq[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/addn_hosts - 1 addresses Feb 1 04:52:27 localhost podman[303881]: 2026-02-01 09:52:27.362583052 +0000 UTC m=+0.057736961 container kill 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:52:27 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/host Feb 1 04:52:27 localhost 
dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/opts Feb 1 04:52:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:27.633 259225 INFO neutron.agent.dhcp.agent [None req-130b50b6-1d53-46d0-a999-4088b2e200fe - - - - - -] DHCP configuration for ports {'37cf06b3-ab28-46e6-8f77-67f52e288c13'} is completed#033[00m Feb 1 04:52:27 localhost nova_compute[274317]: 2026-02-01 09:52:27.974 274321 DEBUG nova.network.neutron [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Updating instance_info_cache with network_info: [{"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.039 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Releasing lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.042 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm_4plr78',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5aefea54-941a-48bf-ad9e-7f13fdfdb4ed',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.043 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 
1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Creating instance directory: /var/lib/nova/instances/5aefea54-941a-48bf-ad9e-7f13fdfdb4ed pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.044 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Ensure instance console log exists: /var/lib/nova/instances/5aefea54-941a-48bf-ad9e-7f13fdfdb4ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.045 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.046 274321 DEBUG nova.virt.libvirt.vif [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-01T09:52:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-328365138',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604213.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-328365138',id=7,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-01T09:52:21Z,launched_on='np0005604213.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005604213.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ebe5e345d591408fa955b2e811bfaffb',ramdisk_id='',reservation_id='r-hz7zc7vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1924784790',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1924784790-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-01T09:52:21Z,user_data=None,user_id='336655b6
a22d4371b0a5cd24b959dc9a',uuid=5aefea54-941a-48bf-ad9e-7f13fdfdb4ed,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.047 274321 DEBUG nova.network.os_vif_util [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Converting VIF {"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.048 274321 DEBUG nova.network.os_vif_util [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.049 274321 DEBUG os_vif [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 
840578c6ea7d45ab96b8ea958c57962b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.099 274321 DEBUG ovsdbapp.backend.ovs_idl [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.099 274321 DEBUG ovsdbapp.backend.ovs_idl [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.100 274321 DEBUG ovsdbapp.backend.ovs_idl [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.101 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.101 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.102 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.103 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.105 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.108 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default 
default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.130 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.130 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.130 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.131 274321 INFO oslo.privsep.daemon [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2vvhpzoa/privsep.sock']#033[00m Feb 1 04:52:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 238 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 208 op/s Feb 1 04:52:28 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:28.431 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:52:26Z, description=, device_id=f4abece4-f6f1-47bb-a6bc-1160a4cf7739, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=37cf06b3-ab28-46e6-8f77-67f52e288c13, ip_allocation=immediate, mac_address=fa:16:3e:26:e3:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:52:23Z, description=, dns_domain=, id=1a91bc36-a078-4e5e-bd8f-3f791a7ad269, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-129983686-network, port_security_enabled=True, project_id=a75b32a03c2b49f0927f81d1bf3f53d7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=431, status=ACTIVE, subnets=['b8fa8a7c-8ae5-460f-a6c7-4e652af56379'], tags=[], tenant_id=a75b32a03c2b49f0927f81d1bf3f53d7, updated_at=2026-02-01T09:52:24Z, vlan_transparent=None, network_id=1a91bc36-a078-4e5e-bd8f-3f791a7ad269, port_security_enabled=False, project_id=a75b32a03c2b49f0927f81d1bf3f53d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=455, status=DOWN, tags=[], tenant_id=a75b32a03c2b49f0927f81d1bf3f53d7, updated_at=2026-02-01T09:52:27Z on network 1a91bc36-a078-4e5e-bd8f-3f791a7ad269#033[00m Feb 1 04:52:28 localhost systemd[1]: tmp-crun.RMAzWF.mount: Deactivated successfully. 
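[Editor's example] The 'Running privsep helper' entry above shows how nova-compute performs the privileged parts of VIF plugging: oslo.privsep spawns a separate root daemon through nova-rootwrap for the vif_plug_ovs.privsep.vif_plug context, and the daemon lines that follow report the capability set it keeps (CAP_DAC_OVERRIDE and CAP_NET_ADMIN). A minimal sketch of that pattern is shown below; the module layout and the decorated helper are hypothetical illustrations, not the actual vif-plug-ovs source.

```python
# Minimal oslo.privsep sketch (hypothetical module, not the real vif_plug_ovs
# code): declare a privileged context with a small capability set, then mark
# the functions that must run inside the root daemon.
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Corresponds to the '--privsep_context vif_plug_ovs.privsep.vif_plug'
# argument in the log; the capabilities match the
# 'CAP_DAC_OVERRIDE|CAP_NET_ADMIN' line the daemon reports after it starts.
vif_plug = priv_context.PrivContext(
    "vif_plug_ovs",
    cfg_section="vif_plug_ovs_privileged",
    pypath=__name__ + ".vif_plug",
    capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE],
)


@vif_plug.entrypoint
def set_device_mtu(device, mtu):
    # Hypothetical privileged helper: this body runs in the privsep daemon
    # (uid/gid 0), while the calling nova-compute process stays unprivileged.
    import pyroute2
    with pyroute2.IPRoute() as ip:
        idx = ip.link_lookup(ifname=device)[0]
        ip.link("set", index=idx, mtu=mtu)
```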
Feb 1 04:52:28 localhost dnsmasq[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/addn_hosts - 1 addresses Feb 1 04:52:28 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/host Feb 1 04:52:28 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/opts Feb 1 04:52:28 localhost podman[303920]: 2026-02-01 09:52:28.657751104 +0000 UTC m=+0.071134058 container kill 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.731 274321 INFO oslo.privsep.daemon [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.629 303931 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.635 303931 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.638 303931 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Feb 1 04:52:28 localhost nova_compute[274317]: 2026-02-01 09:52:28.638 303931 INFO oslo.privsep.daemon [-] privsep daemon running as pid 303931#033[00m Feb 1 04:52:28 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:28.908 259225 INFO neutron.agent.dhcp.agent [None req-467565a1-31c4-4904-a19f-829412b5f22d - - - - - -] DHCP configuration for ports {'37cf06b3-ab28-46e6-8f77-67f52e288c13'} is completed#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.014 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.014 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96aeb3a2-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.015 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96aeb3a2-ba, col_values=(('external_ids', {'iface-id': '96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:4d:83', 'vm-uuid': '5aefea54-941a-48bf-ad9e-7f13fdfdb4ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.068 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:29 localhost 
nova_compute[274317]: 2026-02-01 09:52:29.071 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.074 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.076 274321 INFO os_vif [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba')#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.077 274321 DEBUG nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Feb 1 04:52:29 localhost nova_compute[274317]: 2026-02-01 09:52:29.078 274321 DEBUG nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm_4plr78',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5aefea54-941a-48bf-ad9e-7f13fdfdb4ed',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Feb 1 04:52:30 localhost podman[236852]: time="2026-02-01T09:52:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:52:30 localhost podman[236852]: @ - - [01/Feb/2026:09:52:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159002 "" "Go-http-client/1.1" Feb 1 04:52:30 localhost podman[236852]: @ - - [01/Feb/2026:09:52:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19215 "" "Go-http-client/1.1" Feb 1 04:52:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 238 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s Feb 1 04:52:30 localhost ovn_controller[152787]: 2026-02-01T09:52:30Z|00038|ovn_bfd|INFO|Enabled BFD on interface ovn-2186fb-0 Feb 1 04:52:30 localhost ovn_controller[152787]: 
2026-02-01T09:52:30Z|00039|ovn_bfd|INFO|Enabled BFD on interface ovn-e1cc33-0 Feb 1 04:52:30 localhost ovn_controller[152787]: 2026-02-01T09:52:30Z|00040|ovn_bfd|INFO|Enabled BFD on interface ovn-45aa31-0 Feb 1 04:52:30 localhost nova_compute[274317]: 2026-02-01 09:52:30.610 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:30 localhost nova_compute[274317]: 2026-02-01 09:52:30.618 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:30 localhost nova_compute[274317]: 2026-02-01 09:52:30.621 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:30 localhost nova_compute[274317]: 2026-02-01 09:52:30.697 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:52:31 localhost podman[303966]: 2026-02-01 09:52:31.051125677 +0000 UTC m=+0.084534877 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:52:31 localhost podman[303966]: 2026-02-01 09:52:31.060329824 +0000 UTC m=+0.093739014 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:52:31 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
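[Editor's example] The ovsdbapp transactions logged above (AddBridgeCommand, AddPortCommand and DbSetCommand, followed by 'Successfully plugged vif') are the core of the OVS plug: the tap device is added to br-int and its Interface row is tagged with external_ids carrying the Neutron port UUID, MAC and instance UUID, which is what ovn-controller later matches when claiming the logical port. The sketch below reproduces those three commands with ovsdbapp's Open_vSwitch schema API; it is an illustration using the endpoint and identifiers from the log, not the os-vif implementation itself.

```python
# Illustrative ovsdbapp sketch (assumptions: local ovsdb-server on
# tcp:127.0.0.1:6640 as logged; port/VIF identifiers copied from the log).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "tcp:127.0.0.1:6640"
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {
    "iface-id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa",   # Neutron port UUID
    "iface-status": "active",
    "attached-mac": "fa:16:3e:6e:4d:83",
    "vm-uuid": "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed",
}

# One transaction mirroring the three commands in the log; the bridge add is
# a no-op ("Transaction caused no change") because br-int already exists.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap96aeb3a2-ba", may_exist=True))
    txn.add(api.db_set("Interface", "tap96aeb3a2-ba",
                       ("external_ids", external_ids)))
```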
Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.194 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.469 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:31 localhost openstack_network_exporter[239388]: ERROR 09:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:52:31 localhost openstack_network_exporter[239388]: Feb 1 04:52:31 localhost openstack_network_exporter[239388]: ERROR 09:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:52:31 localhost openstack_network_exporter[239388]: Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.590 274321 DEBUG nova.network.neutron [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Port 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa updated with migration profile {'migrating_to': 'np0005604215.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.593 274321 DEBUG nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm_4plr78',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5aefea54-941a-48bf-ad9e-7f13fdfdb4ed',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.612 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:31 localhost nova_compute[274317]: 2026-02-01 09:52:31.689 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:52:31 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:52:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:52:31 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Feb 1 04:52:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:52:31 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 688c4493-3079-406c-84fe-5a7e2c84e7cc (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:52:31 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 688c4493-3079-406c-84fe-5a7e2c84e7cc (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:52:31 localhost ceph-mgr[278126]: [progress INFO root] Completed event 688c4493-3079-406c-84fe-5a7e2c84e7cc (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:52:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:52:31 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:52:31 localhost sshd[304039]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:52:31 localhost systemd[1]: Created slice User Slice of UID 42436. Feb 1 04:52:31 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Feb 1 04:52:31 localhost systemd-logind[761]: New session 76 of user nova. Feb 1 04:52:31 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Feb 1 04:52:31 localhost systemd[1]: Starting User Manager for UID 42436... Feb 1 04:52:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 238 MiB data, 839 MiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s Feb 1 04:52:32 localhost systemd[304043]: Queued start job for default target Main User Target. Feb 1 04:52:32 localhost systemd[304043]: Created slice User Application Slice. Feb 1 04:52:32 localhost systemd[304043]: Started Mark boot as successful after the user session has run 2 minutes. Feb 1 04:52:32 localhost systemd[304043]: Started Daily Cleanup of User's Temporary Directories. Feb 1 04:52:32 localhost systemd[304043]: Reached target Paths. Feb 1 04:52:32 localhost systemd[304043]: Reached target Timers. Feb 1 04:52:32 localhost systemd[304043]: Starting D-Bus User Message Bus Socket... Feb 1 04:52:32 localhost systemd[304043]: Starting Create User's Volatile Files and Directories... Feb 1 04:52:32 localhost systemd[304043]: Listening on D-Bus User Message Bus Socket. Feb 1 04:52:32 localhost systemd[304043]: Reached target Sockets. Feb 1 04:52:32 localhost systemd[304043]: Finished Create User's Volatile Files and Directories. Feb 1 04:52:32 localhost systemd[304043]: Reached target Basic System. Feb 1 04:52:32 localhost systemd[304043]: Reached target Main User Target. Feb 1 04:52:32 localhost systemd[304043]: Startup finished in 150ms. Feb 1 04:52:32 localhost systemd[1]: Started User Manager for UID 42436. Feb 1 04:52:32 localhost systemd[1]: Started Session 76 of User nova. Feb 1 04:52:32 localhost systemd[1]: Started libvirt secret daemon. 
Feb 1 04:52:32 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:52:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:52:32 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Feb 1 04:52:32 localhost kernel: device tap96aeb3a2-ba entered promiscuous mode Feb 1 04:52:32 localhost NetworkManager[5972]: [1769939552.4201] manager: (tap96aeb3a2-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Feb 1 04:52:32 localhost ovn_controller[152787]: 2026-02-01T09:52:32Z|00041|binding|INFO|Claiming lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for this additional chassis. Feb 1 04:52:32 localhost ovn_controller[152787]: 2026-02-01T09:52:32Z|00042|binding|INFO|96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa: Claiming fa:16:3e:6e:4d:83 10.100.0.11 Feb 1 04:52:32 localhost ovn_controller[152787]: 2026-02-01T09:52:32Z|00043|binding|INFO|Claiming lport d16170e5-2dd1-4d5e-a380-5344cdba0aa7 for this additional chassis. Feb 1 04:52:32 localhost ovn_controller[152787]: 2026-02-01T09:52:32Z|00044|binding|INFO|d16170e5-2dd1-4d5e-a380-5344cdba0aa7: Claiming fa:16:3e:db:2d:9c 19.80.0.33 Feb 1 04:52:32 localhost systemd-udevd[304112]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.424 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.432 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:32 localhost NetworkManager[5972]: [1769939552.4437] device (tap96aeb3a2-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 04:52:32 localhost NetworkManager[5972]: [1769939552.4446] device (tap96aeb3a2-ba): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.480 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:32 localhost systemd-machined[202466]: New machine qemu-1-instance-00000007. Feb 1 04:52:32 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000007. 
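[Editor's example] The ovn-controller 'Claiming lport ... for this additional chassis' entries above are the destination-side effect of that profile: during the migration window the Southbound Port_Binding for the port can be claimed by more than one chassis, so traffic keeps flowing wherever the VM currently runs. One way to watch that state from a compute host is to query the Port_Binding row, as in the sketch below; the southbound endpoint is a placeholder and ovn-sbctl is assumed to be installed locally.

```python
# Sketch: inspect the OVN Southbound Port_Binding row for the migrating port.
# Assumes ovn-sbctl can reach the southbound database; SB_DB is a placeholder.
import json
import subprocess

SB_DB = "tcp:127.0.0.1:6642"   # placeholder OVN_Southbound endpoint
LPORT = "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa"

out = subprocess.run(
    ["ovn-sbctl", f"--db={SB_DB}", "--format=json",
     "find", "Port_Binding", f"logical_port={LPORT}"],
    check=True, capture_output=True, text=True,
).stdout

table = json.loads(out)
cols = table["headings"]
for row in table["data"]:
    rec = dict(zip(cols, row))
    # 'chassis', 'additional_chassis' and 'up' show which chassis currently
    # claim the port while the live migration is in flight.
    print(rec["logical_port"], rec["chassis"], rec.get("additional_chassis"))
```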
Feb 1 04:52:32 localhost ovn_controller[152787]: 2026-02-01T09:52:32Z|00045|binding|INFO|Setting lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa ovn-installed in OVS Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.497 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.812 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:32 localhost nova_compute[274317]: 2026-02-01 09:52:32.812 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] VM Started (Lifecycle Event)#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.001 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.032 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.033 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] VM Resumed (Lifecycle Event)#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.068 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.074 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.101 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] During the sync_power process the instance has moved from host np0005604213.localdomain to host np0005604215.localdomain#033[00m Feb 1 04:52:33 localhost dnsmasq[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/addn_hosts - 0 addresses Feb 1 04:52:33 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/host Feb 1 04:52:33 localhost dnsmasq-dhcp[303862]: read /var/lib/neutron/dhcp/1a91bc36-a078-4e5e-bd8f-3f791a7ad269/opts Feb 1 04:52:33 localhost podman[304182]: 2026-02-01 09:52:33.283624016 +0000 UTC m=+0.062927662 container kill 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:52:33 localhost systemd[1]: session-76.scope: Deactivated successfully. Feb 1 04:52:33 localhost systemd-logind[761]: Session 76 logged out. Waiting for processes to exit. Feb 1 04:52:33 localhost systemd-logind[761]: Removed session 76. Feb 1 04:52:33 localhost kernel: device tap9ba17182-29 left promiscuous mode Feb 1 04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.693 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:33 localhost ovn_controller[152787]: 2026-02-01T09:52:33Z|00046|binding|INFO|Releasing lport 9ba17182-297c-4dca-a0cf-d9bfe1422e70 from this chassis (sb_readonly=0) Feb 1 04:52:33 localhost ovn_controller[152787]: 2026-02-01T09:52:33Z|00047|binding|INFO|Setting lport 9ba17182-297c-4dca-a0cf-d9bfe1422e70 down in Southbound Feb 1 04:52:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:33.704 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-1a91bc36-a078-4e5e-bd8f-3f791a7ad269', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a91bc36-a078-4e5e-bd8f-3f791a7ad269', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a75b32a03c2b49f0927f81d1bf3f53d7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c10e3803-7396-403d-9d9d-ba485ed9d9b4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ba17182-297c-4dca-a0cf-d9bfe1422e70) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:33.706 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9ba17182-297c-4dca-a0cf-d9bfe1422e70 in datapath 1a91bc36-a078-4e5e-bd8f-3f791a7ad269 unbound from our chassis#033[00m Feb 1 04:52:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:33.710 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a91bc36-a078-4e5e-bd8f-3f791a7ad269, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:33.711 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1428b3-8d02-4a0d-b9bd-5425f08ed35c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 
04:52:33 localhost nova_compute[274317]: 2026-02-01 09:52:33.722 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e94 e94: 6 total, 6 up, 6 in Feb 1 04:52:34 localhost nova_compute[274317]: 2026-02-01 09:52:34.104 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 292 MiB data, 987 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 164 op/s Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:4d:83 10.100.0.11 Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:4d:83 10.100.0.11 Feb 1 04:52:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00048|binding|INFO|Claiming lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for this chassis. Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00049|binding|INFO|96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa: Claiming fa:16:3e:6e:4d:83 10.100.0.11 Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00050|binding|INFO|Claiming lport d16170e5-2dd1-4d5e-a380-5344cdba0aa7 for this chassis. Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00051|binding|INFO|d16170e5-2dd1-4d5e-a380-5344cdba0aa7: Claiming fa:16:3e:db:2d:9c 19.80.0.33 Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00052|binding|INFO|Setting lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa up in Southbound Feb 1 04:52:34 localhost ovn_controller[152787]: 2026-02-01T09:52:34Z|00053|binding|INFO|Setting lport d16170e5-2dd1-4d5e-a380-5344cdba0aa7 up in Southbound Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.551 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:2d:9c 19.80.0.33'], port_security=['fa:16:3e:db:2d:9c 19.80.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-491001553', 'neutron:cidrs': '19.80.0.33/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-491001553', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'f05aaf36-904c-44ae-a203-34e61744db7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=722e7a10-7816-489f-9516-bc350daf9fce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d16170e5-2dd1-4d5e-a380-5344cdba0aa7) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.554 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:4d:83 10.100.0.11'], port_security=['fa:16:3e:6e:4d:83 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-377096059', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5aefea54-941a-48bf-ad9e-7f13fdfdb4ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01cb494b-1310-460f-acbe-602aefea39c6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-377096059', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f05aaf36-904c-44ae-a203-34e61744db7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604213.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae7d4c2f-1d19-4933-99fa-b8aa62feb38e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.556 158655 INFO neutron.agent.ovn.metadata.agent [-] Port d16170e5-2dd1-4d5e-a380-5344cdba0aa7 in datapath 9c0246b2-3507-4017-b8dd-01251187a6c3 bound to our chassis#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.560 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2f157b64-12ad-48f6-bd1f-788194f131e8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.561 158655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9c0246b2-3507-4017-b8dd-01251187a6c3#033[00m Feb 1 04:52:34 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:34.772 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-7ebd8498-3f44-4651-9102-d6c4dae99d3c req-210d032b-cdb2-4687-85bd-53137bd4893b 0156acb7bf9847849608ca90a8674720 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] This port is not SRIOV, skip binding for port 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa.#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.972 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[14e41bcd-43c2-4679-99e7-973c272af255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.974 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9c0246b2-31 in ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.975 
303130 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9c0246b2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.975 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[571caa3d-dfca-4ff5-b644-558f2f8425a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.977 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[1d553cdf-dfe3-475b-8719-bbe284a0c0b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:34.998 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[13e64cd0-b4ea-43c0-bd14-ffc38f3b64c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.009 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[edc1753c-5368-42c4-9cfd-284bb2424772]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.011 158655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp27n_5egp/privsep.sock']#033[00m Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.061 274321 INFO nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Post operation of migration started#033[00m Feb 1 04:52:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.228 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquiring lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.229 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquired lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.229 274321 DEBUG nova.network.neutron [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.625 158655 INFO oslo.privsep.daemon [-] Spawned new privsep 
daemon via rootwrap#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.626 158655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp27n_5egp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.519 304214 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.525 304214 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.528 304214 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.529 304214 INFO oslo.privsep.daemon [-] privsep daemon running as pid 304214#033[00m Feb 1 04:52:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:35.630 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[4d59efa9-45c2-4067-b997-c11d80440add]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e95 e95: 6 total, 6 up, 6 in Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.857 274321 DEBUG nova.network.neutron [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Updating instance_info_cache with network_info: [{"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:52:35 localhost nova_compute[274317]: 2026-02-01 09:52:35.986 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Releasing lock "refresh_cache-5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.042 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Acquiring lock 
"compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.043 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.043 274321 DEBUG oslo_concurrency.lockutils [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.051 274321 INFO nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.051 304214 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.052 304214 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.052 304214 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:36 localhost journal[224673]: Domain id=1 name='instance-00000007' uuid=5aefea54-941a-48bf-ad9e-7f13fdfdb4ed is tainted: custom-monitor Feb 1 04:52:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 292 MiB data, 987 MiB used, 41 GiB / 42 GiB avail; 852 KiB/s rd, 6.3 MiB/s wr, 161 op/s Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.226 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:52:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.545 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[e7eea43a-4de5-4d2d-bfad-9f059f5ac240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost systemd-udevd[304224]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:52:36 localhost NetworkManager[5972]: [1769939556.5768] manager: (tap9c0246b2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.571 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7626a49d-ed00-48c4-966a-074d73e1cfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.614 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[baa897f7-8275-4887-9a38-1a343785b2e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.618 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f3573a-338a-4695-8ff7-7161b5154b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9c0246b2-31: link becomes ready Feb 1 04:52:36 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9c0246b2-30: link becomes ready Feb 1 04:52:36 localhost NetworkManager[5972]: [1769939556.6387] device (tap9c0246b2-30): carrier: link connected Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.642 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfa03a8-535c-4500-9e43-66f76ee3ad17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.662 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee79063-000d-450c-9117-4a2c09da834d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9c0246b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:90:aa:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': 
[['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164074, 'reachable_time': 15046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304244, 'error': None, 'target': 'ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.678 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ebba61ea-027e-49b9-9332-1d942e8bf1cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:aa37'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1164074, 'tstamp': 1164074}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304245, 'error': None, 'target': 'ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.695 303130 DEBUG 
oslo.privsep.daemon [-] privsep: reply[7fecdd7f-048a-4dfa-9b35-f6fe39545552]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9c0246b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:90:aa:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164074, 'reachable_time': 15046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 
'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304246, 'error': None, 'target': 'ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.722 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[68714008-38b9-42ec-8c3d-93ab475d3ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.765 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[651633e6-78d6-4c0b-9d5d-7a2568a62750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.767 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c0246b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.769 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.770 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c0246b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:36 localhost kernel: device tap9c0246b2-30 entered promiscuous mode Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.776 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.780 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.780 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9c0246b2-30, col_values=(('external_ids', {'iface-id': '8e91955e-c3fb-4309-8605-7dae9ca4cd95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:36 localhost 
nova_compute[274317]: 2026-02-01 09:52:36.783 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost ovn_controller[152787]: 2026-02-01T09:52:36Z|00054|binding|INFO|Releasing lport 8e91955e-c3fb-4309-8605-7dae9ca4cd95 from this chassis (sb_readonly=0) Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.787 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.788 158655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9c0246b2-3507-4017-b8dd-01251187a6c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9c0246b2-3507-4017-b8dd-01251187a6c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.789 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[444abee7-bd02-4fef-b2cc-c5b31a65cc79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.790 158655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: global Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: log /dev/log local0 debug Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: log-tag haproxy-metadata-proxy-9c0246b2-3507-4017-b8dd-01251187a6c3 Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: user root Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: group root Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: maxconn 1024 Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: pidfile /var/lib/neutron/external/pids/9c0246b2-3507-4017-b8dd-01251187a6c3.pid.haproxy Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: daemon Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: defaults Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: log global Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: mode http Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: option httplog Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: option dontlognull Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: option http-server-close Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: option forwardfor Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: retries 3 Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: timeout http-request 30s Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: timeout connect 30s Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: timeout client 32s Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: timeout server 32s Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: timeout http-keep-alive 30s Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: listen listener Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: bind 169.254.169.254:80 Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: server metadata /var/lib/neutron/metadata_proxy Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: http-request add-header X-OVN-Network-ID 9c0246b2-3507-4017-b8dd-01251187a6c3 Feb 1 04:52:36 localhost 
ovn_metadata_agent[158650]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 1 04:52:36 localhost nova_compute[274317]: 2026-02-01 09:52:36.790 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:36 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:36.791 158655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3', 'env', 'PROCESS_TAG=haproxy-9c0246b2-3507-4017-b8dd-01251187a6c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9c0246b2-3507-4017-b8dd-01251187a6c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 1 04:52:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.061 274321 INFO nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m Feb 1 04:52:37 localhost podman[304279]: Feb 1 04:52:37 localhost podman[304279]: 2026-02-01 09:52:37.265186705 +0000 UTC m=+0.089766069 container create 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 1 04:52:37 localhost ovn_controller[152787]: 2026-02-01T09:52:37Z|00055|binding|INFO|Releasing lport 8e91955e-c3fb-4309-8605-7dae9ca4cd95 from this chassis (sb_readonly=0) Feb 1 04:52:37 localhost podman[304279]: 2026-02-01 09:52:37.223586088 +0000 UTC m=+0.048165502 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.344 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:37 localhost systemd[1]: Started libpod-conmon-905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb.scope. Feb 1 04:52:37 localhost systemd[1]: Started libcrun container. 
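[editor's note] The DelPortCommand / AddPortCommand / DbSetCommand entries logged above ("Running txn n=1 command(idx=0): ...") are ovsdbapp commands issued against the local Open vSwitch database. A minimal sketch of the same sequence through ovsdbapp's public Open_vSwitch API, with the three logged steps combined into one transaction for brevity; the OVSDB socket path and timeout are assumptions, while the bridge, port and iface-id values are copied from the log:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local OVSDB endpoint; the agent uses its own configured connection.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # Same three steps as the logged commands: drop the tap from br-ex if present,
    # plug it into br-int, then tie it to the Neutron port via external_ids.
    txn.add(api.del_port('tap9c0246b2-30', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap9c0246b2-30', may_exist=True))
    txn.add(api.db_set('Interface', 'tap9c0246b2-30',
                       ('external_ids',
                        {'iface-id': '8e91955e-c3fb-4309-8605-7dae9ca4cd95'})))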
Feb 1 04:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99c826e4ffdecd670f48080610e81d3245f462deda0b0580ae2ad15e879a9a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:37 localhost podman[304279]: 2026-02-01 09:52:37.377660521 +0000 UTC m=+0.202239895 container init 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:52:37 localhost podman[304279]: 2026-02-01 09:52:37.38761524 +0000 UTC m=+0.212194604 container start 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:37 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [NOTICE] (304297) : New worker (304299) forked Feb 1 04:52:37 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [NOTICE] (304297) : Loading success. 
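[editor's note] The haproxy_cfg dump above is logged one directive per syslog line by neutron.agent.ovn.metadata.driver. A hypothetical helper that reproduces the same proxy config for a given datapath; the directive text is taken from the dump (indentation added for readability), while the function name and the write step are illustrative only:

import textwrap

def render_metadata_proxy_cfg(network_id: str) -> str:
    # Mirrors the haproxy_cfg logged by create_config_file above.
    return textwrap.dedent(f"""\
        global
            log /dev/log local0 debug
            log-tag haproxy-metadata-proxy-{network_id}
            user root
            group root
            maxconn 1024
            pidfile /var/lib/neutron/external/pids/{network_id}.pid.haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor
            retries 3
            timeout http-request 30s
            timeout connect 30s
            timeout client 32s
            timeout server 32s
            timeout http-keep-alive 30s

        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID {network_id}
        """)

net = '9c0246b2-3507-4017-b8dd-01251187a6c3'
# The agent then launches: haproxy -f /var/lib/neutron/ovn-metadata-proxy/<net>.conf
with open(f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf', 'w') as fh:
    fh.write(render_metadata_proxy_cfg(net))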
Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.433 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa in datapath 01cb494b-1310-460f-acbe-602aefea39c6 unbound from our chassis#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.436 158655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01cb494b-1310-460f-acbe-602aefea39c6#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.446 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[376e2b97-12ed-40cc-9805-f3798ee446c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.447 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01cb494b-11 in ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.449 303130 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01cb494b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.449 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0ca9c6-6b8b-4981-97b4-da4245e774fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.451 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[88712fa3-c102-490e-b6bc-04d3c2f66594]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.468 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4df313-be1f-461c-b5ba-1e582d1d6739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.478 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[82598059-0d9b-4301-8235-8888046ea1ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.501 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b368e4-b2c6-4eb9-a171-a924ff267b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.506 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[326e019d-09d8-4052-9992-483de85fa64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost NetworkManager[5972]: [1769939557.5081] manager: (tap01cb494b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/18) Feb 1 04:52:37 localhost systemd-udevd[304223]: Network interface NamePolicy= disabled on kernel command line. 
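[editor's note] The "Creating VETH tap01cb494b-11 in ovnmeta-01cb494b-..." step above and the RTM_NEWLINK / RTM_NEWADDR privsep replies that follow go through neutron's privileged ip_lib, which drives netlink via pyroute2. A minimal pyroute2 sketch of the same veth-pair setup; the interface and namespace names are taken from the log, everything else (and running this outside the agent) is an assumption:

from pyroute2 import IPRoute, NetNS, netns

ns_name = 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6'
netns.create(ns_name)                                   # like `ip netns add`

with IPRoute() as ipr:
    # veth pair: tap01cb494b-10 stays in the root namespace (plugged into br-int
    # later), tap01cb494b-11 is moved into the ovnmeta namespace.
    ipr.link('add', ifname='tap01cb494b-10', kind='veth', peer='tap01cb494b-11')
    idx11 = ipr.link_lookup(ifname='tap01cb494b-11')[0]
    ipr.link('set', index=idx11, net_ns_fd=ns_name)
    ipr.link('set', index=ipr.link_lookup(ifname='tap01cb494b-10')[0], state='up')

with NetNS(ns_name) as ns:
    ns.link('set', index=ns.link_lookup(ifname='tap01cb494b-11')[0], state='up')
    # get_links() yields the same netlink attribute lists (IFLA_IFNAME,
    # IFLA_ADDRESS, IFLA_AF_SPEC, ...) seen in the privsep replies in this log.
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_ADDRESS'))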
Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.536 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[c3827b55-1cc3-4801-8d67-34eab1e5e586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.539 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[91aa3261-a068-4745-8bfd-e16263dc866e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap01cb494b-10: link becomes ready Feb 1 04:52:37 localhost NetworkManager[5972]: [1769939557.5575] device (tap01cb494b-10): carrier: link connected Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.562 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9dd0d7-fe4d-4712-a0f6-ff605c625360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.578 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[741e624d-414d-4a70-ae7c-af5b90887f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01cb494b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:32:98:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164166, 'reachable_time': 44924, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304319, 'error': None, 'target': 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.593 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[41d40fef-2207-4cf3-a0ee-94fb99fd937f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:9873'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1164166, 'tstamp': 1164166}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304321, 'error': None, 'target': 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.610 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4050d4-7be4-4645-a22d-66f4c91dafbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01cb494b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 
65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:32:98:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164166, 'reachable_time': 44924, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 
'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304322, 'error': None, 'target': 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.637 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[b0be199a-26d4-4996-927d-5c797e758c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.693 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[3f597552-4456-407b-b332-545f9e286f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.694 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cb494b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.695 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.695 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cb494b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.698 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:37 localhost kernel: device tap01cb494b-10 entered promiscuous mode Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.701 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.702 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01cb494b-10, col_values=(('external_ids', {'iface-id': '6efa26b8-94b4-4ffe-b212-c7bedef06410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.704 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:37 localhost ovn_controller[152787]: 2026-02-01T09:52:37Z|00056|binding|INFO|Releasing lport 6efa26b8-94b4-4ffe-b212-c7bedef06410 from this chassis (sb_readonly=0) Feb 1 04:52:37 localhost nova_compute[274317]: 2026-02-01 09:52:37.714 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.716 158655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01cb494b-1310-460f-acbe-602aefea39c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01cb494b-1310-460f-acbe-602aefea39c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.717 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf7c2f9-2105-47e9-8b0a-85a20806e78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.718 158655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: global Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: log /dev/log local0 debug Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: log-tag haproxy-metadata-proxy-01cb494b-1310-460f-acbe-602aefea39c6 Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: user root Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: group root Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: maxconn 1024 Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: pidfile /var/lib/neutron/external/pids/01cb494b-1310-460f-acbe-602aefea39c6.pid.haproxy Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: daemon Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: defaults Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: log global Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: mode http Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: option httplog Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: option dontlognull Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: option http-server-close Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: option forwardfor Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: retries 3 Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: timeout http-request 30s Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: timeout connect 30s Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: timeout client 32s Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: timeout server 32s Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: timeout http-keep-alive 30s Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: listen listener Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: bind 169.254.169.254:80 Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: server metadata /var/lib/neutron/metadata_proxy Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: http-request add-header X-OVN-Network-ID 01cb494b-1310-460f-acbe-602aefea39c6 Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 1 04:52:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:37.719 158655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6', 'env', 'PROCESS_TAG=haproxy-01cb494b-1310-460f-acbe-602aefea39c6', 'haproxy', '-f', 
'/var/lib/neutron/ovn-metadata-proxy/01cb494b-1310-460f-acbe-602aefea39c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 1 04:52:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e96 e96: 6 total, 6 up, 6 in Feb 1 04:52:38 localhost nova_compute[274317]: 2026-02-01 09:52:38.070 274321 INFO nova.virt.libvirt.driver [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Feb 1 04:52:38 localhost nova_compute[274317]: 2026-02-01 09:52:38.077 274321 DEBUG nova.compute.manager [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:38 localhost nova_compute[274317]: 2026-02-01 09:52:38.097 274321 DEBUG nova.objects.instance [None req-7ebd8498-3f44-4651-9102-d6c4dae99d3c 1c20a58be3994701970a12462e33ab8c 840578c6ea7d45ab96b8ea958c57962b - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Feb 1 04:52:38 localhost podman[304369]: Feb 1 04:52:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 9.2 MiB/s rd, 16 MiB/s wr, 472 op/s Feb 1 04:52:38 localhost dnsmasq[303862]: exiting on receipt of SIGTERM Feb 1 04:52:38 localhost podman[304382]: 2026-02-01 09:52:38.161445772 +0000 UTC m=+0.061827028 container kill 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:52:38 localhost systemd[1]: libpod-1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38.scope: Deactivated successfully. 
Feb 1 04:52:38 localhost podman[304369]: 2026-02-01 09:52:38.203378269 +0000 UTC m=+0.136288929 container create b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:38 localhost podman[304369]: 2026-02-01 09:52:38.108754279 +0000 UTC m=+0.041664959 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:52:38 localhost podman[304397]: 2026-02-01 09:52:38.228621736 +0000 UTC m=+0.056705669 container died 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:38 localhost systemd[1]: Started libpod-conmon-b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c.scope. Feb 1 04:52:38 localhost systemd[1]: Started libcrun container. Feb 1 04:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae23c46f93dd79275de77520a02b78359c7b354b0c2cd55b4d41f11cf0d08430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:38 localhost podman[304397]: 2026-02-01 09:52:38.260160109 +0000 UTC m=+0.088244032 container cleanup 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:52:38 localhost systemd[1]: var-lib-containers-storage-overlay-44b6a4c8167cba5b5fbdfbf9820bb6cd4a6fbd5e72379076f4e21cd139706606-merged.mount: Deactivated successfully. Feb 1 04:52:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38-userdata-shm.mount: Deactivated successfully. Feb 1 04:52:38 localhost systemd[1]: libpod-conmon-1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38.scope: Deactivated successfully. 
Feb 1 04:52:38 localhost podman[304404]: 2026-02-01 09:52:38.293497468 +0000 UTC m=+0.108141802 container remove 1b5f4b7c595945d1b4fdef3bd6ed5ad2e17597e59d7c07eff640617e98913e38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a91bc36-a078-4e5e-bd8f-3f791a7ad269, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:52:38 localhost podman[304369]: 2026-02-01 09:52:38.318364933 +0000 UTC m=+0.251275583 container init b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 1 04:52:38 localhost systemd[1]: tmp-crun.PVuFqE.mount: Deactivated successfully. Feb 1 04:52:38 localhost systemd[1]: run-netns-qdhcp\x2d1a91bc36\x2da078\x2d4e5e\x2dbd8f\x2d3f791a7ad269.mount: Deactivated successfully. Feb 1 04:52:38 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:38.339 259225 INFO neutron.agent.dhcp.agent [None req-eb142ce4-748c-4425-8127-dae17b9bce2e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:52:38 localhost podman[304369]: 2026-02-01 09:52:38.340145362 +0000 UTC m=+0.273056012 container start b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:52:38 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [NOTICE] (304433) : New worker (304435) forked Feb 1 04:52:38 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [NOTICE] (304433) : Loading success. 
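[editor's note] The nova_compute entries further down ("Acquiring lock ... / Lock ... acquired ... / ... released ..." around pop_instance_event and terminate_instance) are oslo.concurrency's standard lock logging from lockutils. A minimal sketch of the same idiom; only the lock names are taken from the log, the guarded functions are illustrative placeholders:

from oslo_concurrency import lockutils

@lockutils.synchronized('5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events')
def pop_event():
    # Body elided; the decorator alone produces the Acquiring/acquired/released
    # DEBUG lines visible in the nova_compute entries below.
    return None

def terminate():
    # Hypothetical stand-in for the work guarded by do_terminate_instance.
    return None

with lockutils.lock('5aefea54-941a-48bf-ad9e-7f13fdfdb4ed'):
    terminate()

pop_event()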
Feb 1 04:52:38 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:38.514 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:52:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e97 e97: 6 total, 6 up, 6 in Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.108 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.311 274321 DEBUG nova.compute.manager [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.312 274321 DEBUG oslo_concurrency.lockutils [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.312 274321 DEBUG oslo_concurrency.lockutils [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.313 274321 DEBUG oslo_concurrency.lockutils [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.313 274321 DEBUG nova.compute.manager [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] No waiting events found dispatching network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.313 274321 WARNING nova.compute.manager [req-de846797-8fb6-4cb8-8194-1db4a623ccb8 req-69e52d04-804d-4ce3-bb97-16a7251e078e 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received unexpected event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for instance with vm_state active and task_state None.#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.892 
274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.892 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.893 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.893 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.893 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.895 274321 INFO nova.compute.manager [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Terminating instance#033[00m Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.896 274321 DEBUG nova.compute.manager [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 1 04:52:39 localhost kernel: device tap96aeb3a2-ba left promiscuous mode Feb 1 04:52:39 localhost NetworkManager[5972]: [1769939559.9794] device (tap96aeb3a2-ba): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.987 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:39 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00057|binding|INFO|Releasing lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa from this chassis (sb_readonly=0) Feb 1 04:52:39 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00058|binding|INFO|Setting lport 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa down in Southbound Feb 1 04:52:39 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00059|binding|INFO|Releasing lport d16170e5-2dd1-4d5e-a380-5344cdba0aa7 from this chassis (sb_readonly=0) Feb 1 04:52:39 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00060|binding|INFO|Setting lport d16170e5-2dd1-4d5e-a380-5344cdba0aa7 down in Southbound Feb 1 04:52:39 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00061|binding|INFO|Removing iface tap96aeb3a2-ba ovn-installed in OVS Feb 1 04:52:39 localhost nova_compute[274317]: 2026-02-01 09:52:39.990 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00062|binding|INFO|Releasing lport 6efa26b8-94b4-4ffe-b212-c7bedef06410 from this chassis (sb_readonly=0) Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:39Z|00063|binding|INFO|Releasing lport 8e91955e-c3fb-4309-8605-7dae9ca4cd95 from this chassis (sb_readonly=0) Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:40Z|00064|ovn_bfd|INFO|Disabled BFD on interface ovn-2186fb-0 Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:40Z|00065|ovn_bfd|INFO|Disabled BFD on interface ovn-e1cc33-0 Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:40Z|00066|ovn_bfd|INFO|Disabled BFD on interface ovn-45aa31-0 Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.006 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:39.999 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:2d:9c 19.80.0.33'], port_security=['fa:16:3e:db:2d:9c 19.80.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-491001553', 'neutron:cidrs': '19.80.0.33/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-491001553', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '5', 'neutron:security_group_ids': 
'f05aaf36-904c-44ae-a203-34e61744db7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=722e7a10-7816-489f-9516-bc350daf9fce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d16170e5-2dd1-4d5e-a380-5344cdba0aa7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.002 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:4d:83 10.100.0.11'], port_security=['fa:16:3e:6e:4d:83 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-377096059', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5aefea54-941a-48bf-ad9e-7f13fdfdb4ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01cb494b-1310-460f-acbe-602aefea39c6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-377096059', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f05aaf36-904c-44ae-a203-34e61744db7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae7d4c2f-1d19-4933-99fa-b8aa62feb38e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.004 158655 INFO neutron.agent.ovn.metadata.agent [-] Port d16170e5-2dd1-4d5e-a380-5344cdba0aa7 in datapath 9c0246b2-3507-4017-b8dd-01251187a6c3 unbound from our chassis#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.008 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2f157b64-12ad-48f6-bd1f-788194f131e8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.008 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9c0246b2-3507-4017-b8dd-01251187a6c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.009 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3b292a-8ebb-4cf5-9937-bfb38adf361d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.010 158655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3 namespace which is not needed anymore#033[00m Feb 1 
04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.012 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully. Feb 1 04:52:40 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 1.490s CPU time. Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.046 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:40Z|00067|binding|INFO|Releasing lport 6efa26b8-94b4-4ffe-b212-c7bedef06410 from this chassis (sb_readonly=0) Feb 1 04:52:40 localhost ovn_controller[152787]: 2026-02-01T09:52:40Z|00068|binding|INFO|Releasing lport 8e91955e-c3fb-4309-8605-7dae9ca4cd95 from this chassis (sb_readonly=0) Feb 1 04:52:40 localhost systemd-machined[202466]: Machine qemu-1-instance-00000007 terminated. Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.052 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.132 274321 INFO nova.virt.libvirt.driver [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Instance destroyed successfully.#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.133 274321 DEBUG nova.objects.instance [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lazy-loading 'resources' on Instance uuid 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:52:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 7.9 MiB/s wr, 256 op/s Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.150 274321 DEBUG nova.virt.libvirt.vif [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-01T09:52:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-328365138',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604215.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-328365138',id=7,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-01T09:52:21Z,launched_on='np0005604213.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005604215.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='ebe5e345d591408fa955b2e811bfaffb',ramdisk_id='',reservation_id='r-hz7zc7vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1924784790',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1924784790-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-01T09:52:38Z,user_data=None,user_id='336655b6a22d4371b0a5cd24b959dc9a',uuid=5aefea54-941a-48bf-ad9e-7f13fdfdb4ed,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.150 274321 DEBUG nova.network.os_vif_util [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default 
default] Converting VIF {"id": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "address": "fa:16:3e:6e:4d:83", "network": {"id": "01cb494b-1310-460f-acbe-602aefea39c6", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1791362587-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ebe5e345d591408fa955b2e811bfaffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96aeb3a2-ba", "ovs_interfaceid": "96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.151 274321 DEBUG nova.network.os_vif_util [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.152 274321 DEBUG os_vif [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.154 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.155 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96aeb3a2-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.157 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.158 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.161 274321 INFO os_vif [None 
req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:4d:83,bridge_name='br-int',has_traffic_filtering=True,id=96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa,network=Network(01cb494b-1310-460f-acbe-602aefea39c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap96aeb3a2-ba')#033[00m Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [NOTICE] (304297) : haproxy version is 2.8.14-c23fe91 Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [NOTICE] (304297) : path to executable is /usr/sbin/haproxy Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [WARNING] (304297) : Exiting Master process... Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [WARNING] (304297) : Exiting Master process... Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [ALERT] (304297) : Current worker (304299) exited with code 143 (Terminated) Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3[304293]: [WARNING] (304297) : All workers exited. Exiting... (0) Feb 1 04:52:40 localhost systemd[1]: libpod-905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb.scope: Deactivated successfully. Feb 1 04:52:40 localhost podman[304469]: 2026-02-01 09:52:40.203607979 +0000 UTC m=+0.081603486 container died 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:52:40 localhost podman[304469]: 2026-02-01 09:52:40.244551424 +0000 UTC m=+0.122546891 container cleanup 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:52:40 localhost podman[304508]: 2026-02-01 09:52:40.282491737 +0000 UTC m=+0.066441102 container cleanup 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:52:40 
localhost systemd[1]: libpod-conmon-905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb.scope: Deactivated successfully. Feb 1 04:52:40 localhost podman[304525]: 2026-02-01 09:52:40.356771632 +0000 UTC m=+0.090951765 container remove 905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.362 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1c2b3e-8c0f-46b9-a708-6c9e22bc782b]: (4, ('Sun Feb 1 09:52:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3 (905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb)\n905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb\nSun Feb 1 09:52:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3 (905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb)\n905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.363 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[fb899725-e239-4edd-b40b-881d60ced2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.364 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c0246b2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.366 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost kernel: device tap9c0246b2-30 left promiscuous mode Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.376 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.380 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[139ae1d7-6502-40ff-8b99-4f1127d2edf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.390 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7f52f8d9-76bb-40e8-a621-06af7cd46539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.391 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[1406f483-0344-4366-8c6d-c2cb10d9090c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.408 303130 DEBUG oslo.privsep.daemon [-] privsep: 
reply[8233a994-7653-4bcb-8e21-396d817e2c4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164064, 'reachable_time': 41213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 
'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304543, 'error': None, 'target': 'ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.418 158836 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9c0246b2-3507-4017-b8dd-01251187a6c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.419 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[6d96ccd3-a7b5-4b46-b667-8a7bb80d469c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.419 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa in datapath 01cb494b-1310-460f-acbe-602aefea39c6 unbound from our chassis#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.421 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01cb494b-1310-460f-acbe-602aefea39c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.422 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d72da791-354a-4918-b555-262f9aa6a035]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.422 158655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 namespace which is not needed anymore#033[00m Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [NOTICE] (304433) : haproxy version is 2.8.14-c23fe91 Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [NOTICE] (304433) : path to executable is /usr/sbin/haproxy Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [WARNING] (304433) : Exiting Master process... Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [ALERT] (304433) : Current worker (304435) exited with code 143 (Terminated) Feb 1 04:52:40 localhost neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6[304425]: [WARNING] (304433) : All workers exited. Exiting... 
(0) Feb 1 04:52:40 localhost systemd[1]: libpod-b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c.scope: Deactivated successfully. Feb 1 04:52:40 localhost podman[304562]: 2026-02-01 09:52:40.619999367 +0000 UTC m=+0.076367091 container died b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:40 localhost podman[304562]: 2026-02-01 09:52:40.667171758 +0000 UTC m=+0.123539452 container cleanup b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:52:40 localhost podman[304576]: 2026-02-01 09:52:40.687223073 +0000 UTC m=+0.065879804 container cleanup b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:52:40 localhost systemd[1]: libpod-conmon-b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c.scope: Deactivated successfully. 
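The entries above, and the matching sequence that follows for network 01cb494b-1310-460f-acbe-602aefea39c6, show the OVN metadata agent's teardown pattern once the last VIF port of a network leaves this chassis: stop and delete the neutron-haproxy-ovnmeta-<network> sidecar container, drop the metadata tap port from br-int, and remove the ovnmeta-<network> namespace. The Python sketch below is an illustrative cross-check of that end state, not part of the captured log; it assumes root access on the compute node, the ip and podman CLIs on PATH, and the naming conventions visible above.

import subprocess

def ovnmeta_namespaces():
    # "ip netns list" prints one namespace per line, e.g.
    # "ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 (id: 4)".
    out = subprocess.run(["ip", "netns", "list"],
                         capture_output=True, text=True, check=True).stdout
    return {line.split()[0] for line in out.splitlines()
            if line.startswith("ovnmeta-")}

def haproxy_sidecars():
    # Sidecar containers follow the "neutron-haproxy-<namespace>" naming
    # seen in the log; strip the prefix so the two sets are comparable.
    out = subprocess.run(["podman", "ps", "-a", "--format", "{{.Names}}"],
                         capture_output=True, text=True, check=True).stdout
    return {name.removeprefix("neutron-haproxy-")
            for name in out.splitlines()
            if name.startswith("neutron-haproxy-ovnmeta-")}

if __name__ == "__main__":
    # After a clean teardown both sets agree; a name present in only one of
    # them points at an interrupted cleanup worth investigating.
    for ns in sorted(ovnmeta_namespaces() ^ haproxy_sidecars()):
        print("inconsistent metadata datapath state:", ns)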
Feb 1 04:52:40 localhost podman[304591]: 2026-02-01 09:52:40.766750112 +0000 UTC m=+0.078261520 container remove b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.771 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d21b1b89-d9eb-484d-ab43-f1c67eda1b96]: (4, ('Sun Feb 1 09:52:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 (b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c)\nb9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c\nSun Feb 1 09:52:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 (b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c)\nb9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.773 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[3a15d205-ff39-43b7-b7e2-cf05220b0e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.774 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cb494b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.776 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost kernel: device tap01cb494b-10 left promiscuous mode Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.783 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.786 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[054674a1-652a-4cb2-84e8-05c3fd7e3d2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.800 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[728bfe85-1294-4326-8ebc-906e073e0df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.802 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c21b413b-3622-4c0b-8a65-ccd500d97c26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.819 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d57a2b-0763-4ab7-b3bd-119fdbbf1a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 
'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1164160, 'reachable_time': 27069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 
'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304608, 'error': None, 'target': 'ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.820 158836 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01cb494b-1310-460f-acbe-602aefea39c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 1 04:52:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:40.821 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[6d42a85c-dcab-4b23-a14d-5a89d8e54f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.931 274321 INFO nova.virt.libvirt.driver [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Deleting instance files /var/lib/nova/instances/5aefea54-941a-48bf-ad9e-7f13fdfdb4ed_del#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.934 274321 INFO nova.virt.libvirt.driver [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Deletion of /var/lib/nova/instances/5aefea54-941a-48bf-ad9e-7f13fdfdb4ed_del complete#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.992 274321 DEBUG nova.virt.libvirt.host [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.993 274321 INFO nova.virt.libvirt.host [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] UEFI support detected#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.994 274321 INFO nova.compute.manager [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.995 274321 DEBUG oslo.service.loopingcall [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Waiting for function 
nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.996 274321 DEBUG nova.compute.manager [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 1 04:52:40 localhost nova_compute[274317]: 2026-02-01 09:52:40.996 274321 DEBUG nova.network.neutron [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 1 04:52:41 localhost systemd[1]: var-lib-containers-storage-overlay-ae23c46f93dd79275de77520a02b78359c7b354b0c2cd55b4d41f11cf0d08430-merged.mount: Deactivated successfully. Feb 1 04:52:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9c3ff5d69e08cd7b2074869c3a668f42c533f574c855ca03e98330e4906679c-userdata-shm.mount: Deactivated successfully. Feb 1 04:52:41 localhost systemd[1]: run-netns-ovnmeta\x2d01cb494b\x2d1310\x2d460f\x2dacbe\x2d602aefea39c6.mount: Deactivated successfully. Feb 1 04:52:41 localhost systemd[1]: var-lib-containers-storage-overlay-b99c826e4ffdecd670f48080610e81d3245f462deda0b0580ae2ad15e879a9a8-merged.mount: Deactivated successfully. Feb 1 04:52:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-905eac83e0bd4743dba3e7c52cadf071fcd55b92c98c70803043438547792bbb-userdata-shm.mount: Deactivated successfully. Feb 1 04:52:41 localhost systemd[1]: run-netns-ovnmeta\x2d9c0246b2\x2d3507\x2d4017\x2db8dd\x2d01251187a6c3.mount: Deactivated successfully. Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.256 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.351 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.352 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.352 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.353 
274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.353 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] No waiting events found dispatching network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.353 274321 WARNING nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received unexpected event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for instance with vm_state active and task_state deleting.#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.353 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received event network-vif-unplugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.354 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.354 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.354 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.355 274321 DEBUG nova.compute.manager 
[req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] No waiting events found dispatching network-vif-unplugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.355 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received event network-vif-unplugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.355 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.356 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.356 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.356 274321 DEBUG oslo_concurrency.lockutils [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.357 274321 DEBUG nova.compute.manager [req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] No waiting events found dispatching network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:52:41 localhost nova_compute[274317]: 2026-02-01 09:52:41.357 274321 WARNING nova.compute.manager 
[req-1ef95c59-cca0-475a-b8eb-6da6125e9b91 req-d00d56f1-388f-4854-a762-91e8ef9504ba 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Received unexpected event network-vif-plugged-96aeb3a2-ba77-4c7e-afdb-ef57beaf09fa for instance with vm_state active and task_state deleting.#033[00m Feb 1 04:52:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e98 e98: 6 total, 6 up, 6 in Feb 1 04:52:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:41.771 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:41.771 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:41.771 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 7.9 MiB/s wr, 256 op/s Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.392 274321 DEBUG nova.network.neutron [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.423 274321 INFO nova.compute.manager [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Took 1.43 seconds to deallocate network for instance.#033[00m Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.483 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.484 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.487 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:42 localhost 
nova_compute[274317]: 2026-02-01 09:52:42.527 274321 INFO nova.scheduler.client.report [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Deleted allocations for instance 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed#033[00m Feb 1 04:52:42 localhost nova_compute[274317]: 2026-02-01 09:52:42.599 274321 DEBUG oslo_concurrency.lockutils [None req-37ef1d53-7a9b-45f3-b855-6e641547f320 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Lock "5aefea54-941a-48bf-ad9e-7f13fdfdb4ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:42.623 2 INFO neutron.agent.securitygroups_rpc [None req-c376665f-557d-4fc0-a2ce-3b9dbb425e99 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Security group member updated ['3c3daae5-f0f3-42a8-b893-8c534dcb0055']#033[00m Feb 1 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:52:43 localhost systemd[1]: Stopping User Manager for UID 42436... Feb 1 04:52:43 localhost systemd[304043]: Activating special unit Exit the Session... Feb 1 04:52:43 localhost systemd[304043]: Stopped target Main User Target. Feb 1 04:52:43 localhost systemd[304043]: Stopped target Basic System. Feb 1 04:52:43 localhost systemd[304043]: Stopped target Paths. Feb 1 04:52:43 localhost systemd[304043]: Stopped target Sockets. Feb 1 04:52:43 localhost systemd[304043]: Stopped target Timers. Feb 1 04:52:43 localhost systemd[304043]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 1 04:52:43 localhost systemd[304043]: Stopped Daily Cleanup of User's Temporary Directories. Feb 1 04:52:43 localhost systemd[304043]: Closed D-Bus User Message Bus Socket. Feb 1 04:52:43 localhost systemd[304043]: Stopped Create User's Volatile Files and Directories. Feb 1 04:52:43 localhost systemd[304043]: Removed slice User Application Slice. Feb 1 04:52:43 localhost systemd[304043]: Reached target Shutdown. Feb 1 04:52:43 localhost systemd[304043]: Finished Exit the Session. Feb 1 04:52:43 localhost systemd[304043]: Reached target Exit the Session. Feb 1 04:52:43 localhost systemd[1]: user@42436.service: Deactivated successfully. Feb 1 04:52:43 localhost systemd[1]: Stopped User Manager for UID 42436. Feb 1 04:52:43 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Feb 1 04:52:43 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Feb 1 04:52:43 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Feb 1 04:52:43 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Feb 1 04:52:43 localhost systemd[1]: Removed slice User Slice of UID 42436. 
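The repeated "Acquiring lock" / "acquired" / "released" DEBUG triplets in the nova-compute entries above are emitted by oslo.concurrency's lockutils wrappers (the lockutils.py:404/409/423 locations in the messages); they time how long each caller waited for and then held a named lock such as "compute_resources" or "<instance-uuid>-events", which is where the "held 2.707s" figure for do_terminate_instance comes from. A minimal standalone sketch that should produce the same style of messages, assuming only that oslo.concurrency is installed:

import logging
from oslo_concurrency import lockutils

# lockutils reports the acquire/release timing at DEBUG, as in the journal above.
logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("compute_resources")
def update_usage():
    # Whatever runs here is reflected in the "held N.NNNs" part of the
    # release message.
    pass

def pop_event(instance_uuid):
    # The decorator above is built on this context-manager form.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass

if __name__ == "__main__":
    update_usage()
    pop_event("5aefea54-941a-48bf-ad9e-7f13fdfdb4ed")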
Feb 1 04:52:43 localhost podman[304609]: 2026-02-01 09:52:43.622740158 +0000 UTC m=+0.082462781 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:52:43 localhost podman[304610]: 2026-02-01 09:52:43.685316038 +0000 UTC m=+0.139982045 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:52:43 localhost podman[304609]: 2026-02-01 09:52:43.688808087 +0000 UTC m=+0.148530701 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:52:43 localhost podman[304610]: 2026-02-01 09:52:43.697731345 +0000 UTC m=+0.152397352 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:52:43 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:52:43 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:52:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 224 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 98 KiB/s rd, 29 KiB/s wr, 140 op/s Feb 1 04:52:44 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:44.876 2 INFO neutron.agent.securitygroups_rpc [None req-651e6fa7-c546-4929-9315-764ba9e33bc3 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Security group member updated ['f05aaf36-904c-44ae-a203-34e61744db7d']#033[00m Feb 1 04:52:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:45 localhost nova_compute[274317]: 2026-02-01 09:52:45.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:45 localhost nova_compute[274317]: 2026-02-01 09:52:45.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:52:45 localhost nova_compute[274317]: 2026-02-01 09:52:45.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:52:45 localhost podman[304673]: 2026-02-01 09:52:45.108937495 +0000 UTC m=+0.033278059 container kill 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:52:45 localhost dnsmasq[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/addn_hosts - 0 addresses Feb 1 04:52:45 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/host Feb 1 04:52:45 localhost dnsmasq-dhcp[303619]: read /var/lib/neutron/dhcp/9c0246b2-3507-4017-b8dd-01251187a6c3/opts Feb 1 04:52:45 localhost nova_compute[274317]: 2026-02-01 09:52:45.115 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:52:45 localhost nova_compute[274317]: 2026-02-01 09:52:45.157 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:46 localhost dnsmasq[303619]: exiting on receipt of SIGTERM Feb 1 04:52:46 localhost podman[304710]: 2026-02-01 09:52:46.064392487 +0000 UTC m=+0.065794792 container kill 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:52:46 localhost systemd[1]: libpod-23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0.scope: Deactivated successfully. Feb 1 04:52:46 localhost podman[304726]: 2026-02-01 09:52:46.128369262 +0000 UTC m=+0.046056228 container died 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:52:46 localhost systemd[1]: tmp-crun.oCeIkm.mount: Deactivated successfully. Feb 1 04:52:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 224 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 23 KiB/s wr, 109 op/s Feb 1 04:52:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0-userdata-shm.mount: Deactivated successfully. Feb 1 04:52:46 localhost systemd[1]: var-lib-containers-storage-overlay-53daeb5ecd744789a19f463b75866691ebb76af9c46948b934b77d0920c93713-merged.mount: Deactivated successfully. 
Feb 1 04:52:46 localhost podman[304726]: 2026-02-01 09:52:46.176690518 +0000 UTC m=+0.094377464 container remove 23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c0246b2-3507-4017-b8dd-01251187a6c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:52:46 localhost ovn_controller[152787]: 2026-02-01T09:52:46Z|00069|binding|INFO|Releasing lport c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 from this chassis (sb_readonly=0) Feb 1 04:52:46 localhost ovn_controller[152787]: 2026-02-01T09:52:46Z|00070|binding|INFO|Setting lport c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 down in Southbound Feb 1 04:52:46 localhost kernel: device tapc8e9dce8-3c left promiscuous mode Feb 1 04:52:46 localhost nova_compute[274317]: 2026-02-01 09:52:46.185 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:46.202 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c0246b2-3507-4017-b8dd-01251187a6c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebe5e345d591408fa955b2e811bfaffb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=722e7a10-7816-489f-9516-bc350daf9fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=c8e9dce8-3cef-4d4b-8d3c-5d13d0890663) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:46.203 158655 INFO neutron.agent.ovn.metadata.agent [-] Port c8e9dce8-3cef-4d4b-8d3c-5d13d0890663 in datapath 9c0246b2-3507-4017-b8dd-01251187a6c3 unbound from our chassis#033[00m Feb 1 04:52:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:46.206 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9c0246b2-3507-4017-b8dd-01251187a6c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:52:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:46.207 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1acf6a-cc22-4993-afaa-c6d6faa318fb]: (4, False) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:46 localhost nova_compute[274317]: 2026-02-01 09:52:46.211 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:46 localhost systemd[1]: libpod-conmon-23f4520b0a973c50c64041e4f473fe8222e37918b934ba053a5f01d9377a58c0.scope: Deactivated successfully. Feb 1 04:52:46 localhost nova_compute[274317]: 2026-02-01 09:52:46.258 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e99 e99: 6 total, 6 up, 6 in Feb 1 04:52:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:46.464 259225 INFO neutron.agent.dhcp.agent [None req-5b53c1c1-32b1-4ba9-9916-e803cb5df1f1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:52:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:46.465 259225 INFO neutron.agent.dhcp.agent [None req-5b53c1c1-32b1-4ba9-9916-e803cb5df1f1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:52:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:46.529 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:52:46 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:46.683 2 INFO neutron.agent.securitygroups_rpc [None req-2d42c471-7b15-4400-a993-fcf3849484f7 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Security group member updated ['3c3daae5-f0f3-42a8-b893-8c534dcb0055']#033[00m Feb 1 04:52:46 localhost nova_compute[274317]: 2026-02-01 09:52:46.742 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:47 localhost nova_compute[274317]: 2026-02-01 09:52:47.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:47 localhost systemd[1]: run-netns-qdhcp\x2d9c0246b2\x2d3507\x2d4017\x2db8dd\x2d01251187a6c3.mount: Deactivated successfully. Feb 1 04:52:47 localhost neutron_sriov_agent[252054]: 2026-02-01 09:52:47.421 2 INFO neutron.agent.securitygroups_rpc [None req-0fc1e61c-d2d8-4451-b527-5803b4fad28d 336655b6a22d4371b0a5cd24b959dc9a ebe5e345d591408fa955b2e811bfaffb - - default default] Security group member updated ['f05aaf36-904c-44ae-a203-34e61744db7d']#033[00m Feb 1 04:52:48 localhost nova_compute[274317]: 2026-02-01 09:52:48.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 231 op/s Feb 1 04:52:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e100 e100: 6 total, 6 up, 6 in Feb 1 04:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. 
Feb 1 04:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:52:48 localhost podman[304754]: 2026-02-01 09:52:48.872980403 +0000 UTC m=+0.082955806 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, release=1769056855, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 1 04:52:48 localhost podman[304754]: 2026-02-01 09:52:48.889120446 +0000 UTC m=+0.099095859 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:52:48 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:52:48 localhost podman[304755]: 2026-02-01 09:52:48.976390836 +0000 UTC m=+0.184451250 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Feb 1 04:52:49 localhost podman[304755]: 2026-02-01 09:52:49.012615646 +0000 UTC m=+0.220676100 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:52:49 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.097 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.877 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.878 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.898 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.984 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.985 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.990 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 1 04:52:49 localhost nova_compute[274317]: 2026-02-01 09:52:49.991 274321 INFO nova.compute.claims [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Claim successful on node np0005604215.localdomain#033[00m Feb 1 04:52:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.097 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.110 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.111 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.111 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.138 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 231 op/s Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.160 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:50.472 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:50.474 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.499 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:52:50 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/641399154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.553 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.559 274321 DEBUG nova.compute.provider_tree [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.581 274321 DEBUG nova.scheduler.client.report [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.602 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.603 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Start building networks asynchronously for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.607 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.608 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.608 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.608 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.653 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.654 274321 DEBUG nova.network.neutron [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.667 274321 INFO nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.684 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.773 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.777 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.778 274321 INFO nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Creating image(s)#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.820 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.863 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.903 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.911 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "f978c6f71b922ff24c45ca010751fdcbed665c95" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.912 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "f978c6f71b922ff24c45ca010751fdcbed665c95" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:50 localhost nova_compute[274317]: 2026-02-01 09:52:50.967 274321 DEBUG nova.virt.libvirt.imagebackend [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - 
default default] Image locations are: [{'url': 'rbd://33fac0b9-80c7-560f-918a-c92d3021ca1e/images/a223c2d3-3df7-4d82-921c-31ace200d43c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33fac0b9-80c7-560f-918a-c92d3021ca1e/images/a223c2d3-3df7-4d82-921c-31ace200d43c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 1 04:52:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:52:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1672829836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.051 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.137 274321 WARNING oslo_policy.policy [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.138 274321 WARNING oslo_policy.policy [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.144 274321 DEBUG nova.policy [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.261 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.290 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.292 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11745MB free_disk=41.70050811767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.293 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.293 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.355 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Instance aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.355 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.356 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.418 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:52:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:52:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:52:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/750954759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:52:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:52:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2325372919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.853 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.872 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.881 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.915 274321 ERROR nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] [req-9d1e416f-0fea-4fd5-b5be-e8ecaf324aa0] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID d5eeed9a-e4d0-4244-8d4e-39e5c8263590. 
Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-9d1e416f-0fea-4fd5-b5be-e8ecaf324aa0"}]}#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.931 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.part --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.932 274321 DEBUG nova.virt.images [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] a223c2d3-3df7-4d82-921c-31ace200d43c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.934 274321 DEBUG nova.privsep.utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.935 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.part /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.951 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.972 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.973 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': 
{'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:52:51 localhost nova_compute[274317]: 2026-02-01 09:52:51.991 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.030 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.085 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.107 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.part /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.converted" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:52 localhost 
nova_compute[274317]: 2026-02-01 09:52:52.112 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 121 op/s Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.186 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.188 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "f978c6f71b922ff24c45ca010751fdcbed665c95" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 1.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.221 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.226 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95 aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:52:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
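The entries above are nova's image-cache fetch path: probe the downloaded .part base file with qemu-img info under an oslo prlimit wrapper, convert qcow2 to raw, then (next entries) import the flat file into the Ceph vms pool. A small sketch of the probe-and-convert step, shelling out to the same commands the log records; the base-file path is the one shown above and error handling is omitted.

```python
# Sketch of the probe-and-convert step recorded above, using the same
# commands the log shows (error handling omitted).
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95"

def qemu_img_info(path):
    # Same prlimit wrapper nova uses: cap address space and CPU time while
    # qemu-img parses an untrusted image header.
    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", path, "--force-share", "--output=json"])
    return json.loads(out)

info = qemu_img_info(BASE + ".part")
if info["format"] == "qcow2":
    # Flatten to raw so the image can be pushed into RBD as-is.
    subprocess.check_call(["qemu-img", "convert", "-t", "none",
                           "-O", "raw", "-f", "qcow2",
                           BASE + ".part", BASE + ".converted"])
```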
172.18.0.108:0/2846492519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.553 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.559 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.617 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updated inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.617 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.618 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.643 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.643 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - 
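The DISK_GB figures reported here (total 41, reserved 1, allocation_ratio 0.9) are pool-level numbers the RBD image backend derives from the `ceph df --format=json` calls logged above. A rough sketch of that derivation; the ratio and reserved values are copied from the inventory shown in the log rather than read from configuration (which is where nova actually gets them), and the JSON field names are as emitted by recent Ceph releases.

```python
# Rough sketch: turn `ceph df --format=json` (the call logged above) into a
# DISK_GB inventory record. allocation_ratio 0.9 and reserved 1 GiB are taken
# from the logged inventory; nova reads them from its configuration.
import json
import subprocess

raw = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
total_gb = int(json.loads(raw)["stats"]["total_bytes"] / (1024 ** 3))

disk_gb_inventory = {
    "total": total_gb,        # 41 on this node
    "reserved": 1,
    "min_unit": 1,
    "max_unit": total_gb,
    "step_size": 1,
    "allocation_ratio": 0.9,
}
print(disk_gb_inventory)
```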
- - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.724 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95 aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.831 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] resizing rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.980 274321 DEBUG nova.objects.instance [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lazy-loading 'migration_context' on Instance uuid aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.998 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 1 04:52:52 localhost nova_compute[274317]: 2026-02-01 09:52:52.999 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Ensure instance console log exists: /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:52.999 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.000 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.000 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 
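With the placement conversation settled, the build thread finishes the disk: the rbd import above pushes the flat base file into the vms pool as aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk, and rbd_utils then resizes it to the flavor's 1 GiB root disk (1073741824 bytes). A simplified sketch of the same two steps, assuming the python-rados/python-rbd bindings that nova's rbd_utils wraps are installed; the CLI part mirrors the logged command.

```python
# Simplified sketch of the import + resize step, assuming python-rados and
# python-rbd are available; nova's rbd_utils wraps the same library calls.
import subprocess
import rados
import rbd

SRC = "/var/lib/nova/instances/_base/f978c6f71b922ff24c45ca010751fdcbed665c95"
NAME = "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk"

# Import the flat raw file as a format-2 RBD image (same CLI as the log).
subprocess.check_call(["rbd", "import", "--pool", "vms", SRC, NAME,
                       "--image-format=2",
                       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

# Grow it to the flavor's root_gb (1 GiB for m1.nano, as logged).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, NAME) as image:
            image.resize(1 * 1024 ** 3)   # 1073741824 bytes
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```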
d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:53.224 259225 INFO neutron.agent.linux.ip_lib [None req-32273a97-523d-4b8f-b365-237d9402098e - - - - - -] Device tap7ad39b92-32 cannot be used as it has no MAC address#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.227 274321 DEBUG nova.network.neutron [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Successfully updated port: 3c861704-c594-42f8-a5b3-a274ec84650f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.244 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.244 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquired lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.245 274321 DEBUG nova.network.neutron [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.246 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost kernel: device tap7ad39b92-32 entered promiscuous mode Feb 1 04:52:53 localhost ovn_controller[152787]: 2026-02-01T09:52:53Z|00071|binding|INFO|Claiming lport 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f for this chassis. Feb 1 04:52:53 localhost ovn_controller[152787]: 2026-02-01T09:52:53Z|00072|binding|INFO|7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f: Claiming unknown Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.251 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost NetworkManager[5972]: [1769939573.2524] manager: (tap7ad39b92-32): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Feb 1 04:52:53 localhost systemd-udevd[305072]: Network interface NamePolicy= disabled on kernel command line. 
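At this point the tap device exists on the host and ovn-controller has claimed the logical port for this chassis; the next entries show it being marked ovn-installed and up in the southbound database. To confirm the claim by hand, something like the following works, assuming ovn-sbctl and its generic find database command are available on the node.

```python
# Sketch: inspect the southbound Port_Binding for the claimed lport, assuming
# ovn-sbctl (and its generic "find" database command) is available locally.
import subprocess

LPORT = "7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f"
out = subprocess.check_output(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={LPORT}"], text=True)
print(out)  # chassis should reference np0005604215 once the claim lands
```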
Feb 1 04:52:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:53.260 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fdca6946-14e8-4692-9d79-41002e703846', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca6946-14e8-4692-9d79-41002e703846', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41697a815dfa4c5aaae37b529f6303e1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=013c5d80-ec0c-4f6b-91c1-a2283198de95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:53.262 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f in datapath fdca6946-14e8-4692-9d79-41002e703846 bound to our chassis#033[00m Feb 1 04:52:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:53.264 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fdca6946-14e8-4692-9d79-41002e703846 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:52:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:53.265 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7be5b6a9-dae6-4f93-a6cc-b187122fd5f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:53 localhost ovn_controller[152787]: 2026-02-01T09:52:53Z|00073|binding|INFO|Setting lport 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f ovn-installed in OVS Feb 1 04:52:53 localhost ovn_controller[152787]: 2026-02-01T09:52:53Z|00074|binding|INFO|Setting lport 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f up in Southbound Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.271 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.288 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 
localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost journal[224955]: ethtool ioctl error on tap7ad39b92-32: No such device Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.304 274321 DEBUG nova.compute.manager [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-changed-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.304 274321 DEBUG nova.compute.manager [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Refreshing instance network info cache due to event network-changed-3c861704-c594-42f8-a5b3-a274ec84650f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.305 274321 DEBUG oslo_concurrency.lockutils [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.314 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.336 274321 DEBUG nova.network.neutron [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Instance cache missing network info. 
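Two things are interleaved here: the build thread is still populating the instance's network info cache, while a network-changed external event for the same port arrives from neutron and queues its own cache refresh behind the refresh_cache lock. Later in this burst the driver also registers a waiter for network-vif-plugged before starting the guest. A toy illustration of that register-then-wait pattern follows; it is not nova's implementation (that lives in nova.compute.manager.InstanceEvents), just the shape of it.

```python
# Toy illustration (not nova's code) of the "register a waiter, then block on
# the external event" pattern used around network-vif-plugged.
import threading

_events = {}
_lock = threading.Lock()

def prepare_for_event(name):
    # Register (or reuse) a waiter *before* the action that triggers the event.
    with _lock:
        return _events.setdefault(name, threading.Event())

def deliver_event(name):
    # Called when the external notification arrives (neutron -> nova API).
    with _lock:
        ev = _events.setdefault(name, threading.Event())
    ev.set()

EVENT = "network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f"
waiter = prepare_for_event(EVENT)                 # register first, no race
threading.Timer(0.1, deliver_event, args=[EVENT]).start()  # simulated neutron
if not waiter.wait(timeout=300):
    raise TimeoutError("neutron never reported the VIF as plugged")
print("vif plugged, safe to start the guest")
```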
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.342 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.634 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:53 localhost nova_compute[274317]: 2026-02-01 09:52:53.636 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:52:54 localhost podman[305143]: Feb 1 04:52:54 localhost podman[305143]: 2026-02-01 09:52:54.10627568 +0000 UTC m=+0.081447490 container create a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.111 274321 DEBUG nova.network.neutron [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updating instance_info_cache with network_info: [{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.128 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Releasing lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.128 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Instance network_info: |[{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.129 274321 DEBUG oslo_concurrency.lockutils [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquired lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.129 274321 DEBUG nova.network.neutron [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Refreshing network info cache for port 3c861704-c594-42f8-a5b3-a274ec84650f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.135 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Start _get_guest_xml network_info=[{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", 
"bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-01T09:50:54Z,direct_url=,disk_format='qcow2',id=a223c2d3-3df7-4d82-921c-31ace200d43c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79df39cba1c14309b68e8b61518619fd',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-01T09:50:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encrypted': False, 'image_id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.139 274321 WARNING nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:52:54 localhost systemd[1]: Started libpod-conmon-a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551.scope. Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.147 274321 DEBUG nova.virt.libvirt.host [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Searching host: 'np0005604215.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.148 274321 DEBUG nova.virt.libvirt.host [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.150 274321 DEBUG nova.virt.libvirt.host [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Searching host: 'np0005604215.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.151 274321 DEBUG nova.virt.libvirt.host [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.152 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.153 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-01T09:50:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='04b6d75f-0335-413a-b9d6-dfe49d77feaf',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-01T09:50:54Z,direct_url=,disk_format='qcow2',id=a223c2d3-3df7-4d82-921c-31ace200d43c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79df39cba1c14309b68e8b61518619fd',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-01T09:50:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 1 04:52:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 225 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 5.8 MiB/s wr, 274 op/s Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.153 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.154 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.154 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.155 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.156 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Chose sockets=0, cores=0, threads=0; limits 
were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.156 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.157 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.158 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 1 04:52:54 localhost systemd[1]: tmp-crun.aDVGmy.mount: Deactivated successfully. Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.158 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.158 274321 DEBUG nova.virt.hardware [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 1 04:52:54 localhost podman[305143]: 2026-02-01 09:52:54.064376194 +0000 UTC m=+0.039548004 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.164 274321 DEBUG nova.privsep.utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.165 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:54 localhost systemd[1]: Started libcrun container. 
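The topology lines above show nova enumerating sockets/cores/threads splits for the flavor's single vCPU under effectively unlimited caps (65536 each) and, with no flavor or image preference, settling on 1:1:1. A toy enumeration in the same spirit; nova's real logic lives in nova/virt/hardware.py and differs in detail.

```python
# Toy re-creation of the topology enumeration logged above: every
# sockets*cores*threads factorization of the vCPU count within the limits.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos

print(possible_topologies(1))   # [(1, 1, 1)], matching the log
```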
Feb 1 04:52:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b1613e4d327631643e09ac0edff96b26245deaec5c43c3419a3ce4c98fd9cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:54 localhost podman[305143]: 2026-02-01 09:52:54.186904214 +0000 UTC m=+0.162075994 container init a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:52:54 localhost podman[305143]: 2026-02-01 09:52:54.197192835 +0000 UTC m=+0.172364635 container start a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:52:54 localhost dnsmasq[305163]: started, version 2.85 cachesize 150 Feb 1 04:52:54 localhost dnsmasq[305163]: DNS service limited to local subnets Feb 1 04:52:54 localhost dnsmasq[305163]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:52:54 localhost dnsmasq[305163]: warning: no upstream servers configured Feb 1 04:52:54 localhost dnsmasq-dhcp[305163]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:52:54 localhost dnsmasq[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/addn_hosts - 0 addresses Feb 1 04:52:54 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/host Feb 1 04:52:54 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/opts Feb 1 04:52:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:54.389 259225 INFO neutron.agent.dhcp.agent [None req-68b7025a-8cf1-4e5e-9031-f4a37aba59a8 - - - - - -] DHCP configuration for ports {'0b5f5605-6b74-496a-a8dc-57cf160bde76'} is completed#033[00m Feb 1 04:52:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:54.476 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:52:54 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/776839631' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.569 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.603 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:54 localhost nova_compute[274317]: 2026-02-01 09:52:54.608 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:52:55 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2689872368' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.058 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.061 274321 DEBUG nova.virt.libvirt.vif [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-01T09:52:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1216472824',display_name='tempest-LiveMigrationTest-server-1216472824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604215.localdomain',hostname='tempest-livemigrationtest-server-1216472824',id=8,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005604215.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005604215.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8e4b0fb12f14fbaa248291aa43aacee',ramdisk_id='',reservation_id='r-w7wsdj02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-266774784',owner_user_name='tempest-LiveMigrationTest-266774784-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-01T09:52:50Z,user_data=None,user_id='0416f10a8d4f4da2a6dc6cbd271a3010',uuid=aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.061 274321 DEBUG nova.network.os_vif_util [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Converting VIF {"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": 
"9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.062 274321 DEBUG nova.network.os_vif_util [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.065 274321 DEBUG nova.objects.instance [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lazy-loading 'pci_devices' on Instance uuid aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.085 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] End _get_guest_xml xml= Feb 1 04:52:55 localhost nova_compute[274317]: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 Feb 1 04:52:55 localhost nova_compute[274317]: instance-00000008 Feb 1 04:52:55 localhost nova_compute[274317]: 131072 Feb 1 04:52:55 localhost nova_compute[274317]: 1 Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: tempest-LiveMigrationTest-server-1216472824 Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:54 Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: 128 Feb 1 04:52:55 localhost nova_compute[274317]: 1 Feb 1 04:52:55 localhost nova_compute[274317]: 0 Feb 1 04:52:55 localhost nova_compute[274317]: 0 Feb 1 04:52:55 localhost nova_compute[274317]: 1 Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: Feb 1 04:52:55 localhost nova_compute[274317]: tempest-LiveMigrationTest-266774784-project-member Feb 1 04:52:55 localhost nova_compute[274317]: 
tempest-LiveMigrationTest-266774784 [remaining libvirt domain XML omitted: the element markup was stripped in this capture; surviving values show sysinfo manufacturer RDO, product OpenStack Compute, version 27.5.2-0.20260127144738.eaa65f0.el9, instance uuid aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469, family Virtual Machine, os type hvm, and rng backend /dev/urandom] Feb 1 04:52:55 localhost nova_compute[274317]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.087 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Preparing to wait for external event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.087 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.088 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.088 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010
d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.089 274321 DEBUG nova.virt.libvirt.vif [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-01T09:52:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1216472824',display_name='tempest-LiveMigrationTest-server-1216472824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604215.localdomain',hostname='tempest-livemigrationtest-server-1216472824',id=8,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005604215.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005604215.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8e4b0fb12f14fbaa248291aa43aacee',ramdisk_id='',reservation_id='r-w7wsdj02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-266774784',owner_user_name='tempest-LiveMigrationTest-266774784-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-01T09:52:50Z,user_data=None,user_id='0416f10a8d4f4da2a6dc6cbd271a3010',uuid=aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.090 274321 DEBUG nova.network.os_vif_util [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Converting VIF {"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.090 274321 DEBUG nova.network.os_vif_util [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.091 274321 DEBUG os_vif [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.097 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.097 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.098 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.101 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.101 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c861704-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.102 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c861704-c5, col_values=(('external_ids', {'iface-id': '3c861704-c594-42f8-a5b3-a274ec84650f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:5a:4a', 'vm-uuid': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.129 274321 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.129 274321 INFO nova.compute.manager [-] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] VM Stopped (Lifecycle Event)#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.145 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.148 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.150 274321 DEBUG nova.compute.manager [None req-b071a4b8-fd0b-42af-bb18-6c522ddefb5e - - - - - -] [instance: 5aefea54-941a-48bf-ad9e-7f13fdfdb4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.151 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.152 274321 INFO os_vif [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5')#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.291 274321 DEBUG nova.network.neutron [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - 
default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updated VIF entry in instance network info cache for port 3c861704-c594-42f8-a5b3-a274ec84650f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.292 274321 DEBUG nova.network.neutron [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updating instance_info_cache with network_info: [{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.297 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.297 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.297 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] No VIF found with MAC fa:16:3e:c4:5a:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.298 274321 INFO nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Using config drive#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.328 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.334 274321 DEBUG oslo_concurrency.lockutils [req-19a75de5-1935-487b-96d7-ffbfaccf042b req-785ea1dd-a71c-4232-b521-7c041566ee25 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Releasing lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.502 274321 INFO nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Creating config drive at /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.509 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp55raenok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.636 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp55raenok" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.677 274321 DEBUG nova.storage.rbd_utils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] rbd image aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config does 
not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.682 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.905 274321 DEBUG oslo_concurrency.processutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.906 274321 INFO nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Deleting local config drive /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config because it was imported into RBD.#033[00m Feb 1 04:52:55 localhost kernel: device tap3c861704-c5 entered promiscuous mode Feb 1 04:52:55 localhost NetworkManager[5972]: [1769939575.9446] manager: (tap3c861704-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/20) Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.947 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.951 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00075|binding|INFO|Claiming lport 3c861704-c594-42f8-a5b3-a274ec84650f for this chassis. Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00076|binding|INFO|3c861704-c594-42f8-a5b3-a274ec84650f: Claiming fa:16:3e:c4:5a:4a 10.100.0.12 Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00077|binding|INFO|Claiming lport 9adda630-e8be-4f28-9d6e-88decd53d5c0 for this chassis. 
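
The two config-drive commands above (mkisofs building disk.config, then rbd import into the vms pool) can be replayed by hand when debugging config-drive problems. A minimal sketch, copying the flags from the logged command lines; /tmp/cfgdrive-staging is a hypothetical stand-in for the temporary directory (the log shows /tmp/tmp55raenok), and the final rbd info check is an addition for verification, not something Nova runs:

  # rebuild the config-drive ISO (flags copied from the logged mkisofs invocation;
  # the publisher string is quoted here because the shell splits arguments, whereas
  # Nova passes it as a single argv element)
  /usr/bin/mkisofs -o /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config \
      -ldots -allow-lowercase -allow-multidot -l \
      -publisher "OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9" \
      -quiet -J -r -V config-2 /tmp/cfgdrive-staging
  # import it into Ceph the same way the driver does, then confirm the image exists
  rbd import --pool vms /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.config \
      aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf
  rbd info --pool vms aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk.config --id openstack --conf /etc/ceph/ceph.conf
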
Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00078|binding|INFO|9adda630-e8be-4f28-9d6e-88decd53d5c0: Claiming fa:16:3e:87:8a:c3 19.80.0.117 Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.954 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost NetworkManager[5972]: [1769939575.9583] device (tap3c861704-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 1 04:52:55 localhost NetworkManager[5972]: [1769939575.9595] device (tap3c861704-c5): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 1 04:52:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00079|ovn_bfd|INFO|Enabled BFD on interface ovn-2186fb-0 Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00080|ovn_bfd|INFO|Enabled BFD on interface ovn-e1cc33-0 Feb 1 04:52:55 localhost ovn_controller[152787]: 2026-02-01T09:52:55Z|00081|ovn_bfd|INFO|Enabled BFD on interface ovn-45aa31-0 Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.975 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:8a:c3 19.80.0.117'], port_security=['fa:16:3e:87:8a:c3 19.80.0.117'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['3c861704-c594-42f8-a5b3-a274ec84650f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-599288938', 'neutron:cidrs': '19.80.0.117/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f10af3d7-b861-4585-95de-68162ae73827', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-599288938', 'neutron:project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c3daae5-f0f3-42a8-b893-8c534dcb0055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=13e91b2c-4ccc-47a7-a97e-5773902dea41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=9adda630-e8be-4f28-9d6e-88decd53d5c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.976 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5a:4a 10.100.0.12'], port_security=['fa:16:3e:c4:5a:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1236294281', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1236294281', 'neutron:project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c3daae5-f0f3-42a8-b893-8c534dcb0055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49493626-0ffa-4ff3-a83b-4e74511666de, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=3c861704-c594-42f8-a5b3-a274ec84650f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.977 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9adda630-e8be-4f28-9d6e-88decd53d5c0 in datapath f10af3d7-b861-4585-95de-68162ae73827 bound to our chassis#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.979 158655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f10af3d7-b861-4585-95de-68162ae73827#033[00m Feb 1 04:52:55 localhost nova_compute[274317]: 2026-02-01 09:52:55.979 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.988 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d8eaf9-8bd2-4869-9bb2-d0116fb025e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.989 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf10af3d7-b1 in ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.991 303130 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf10af3d7-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.992 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3e87b7-f8ba-428c-af72-d12356bcdfe0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:55.992 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a33982-fd12-4707-a604-5de8550540f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:55 localhost systemd-machined[202466]: New machine qemu-2-instance-00000008. Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.000 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[e19e21bb-336b-44c3-8a19-871286fa9ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.015 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000008. 
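
At this point the metadata namespace ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 has its veth pair and the new guest has been registered with systemd-machined. A short sketch of how that state could be inspected from the host; all names are taken from the log lines above, and on a containerized deployment like this one virsh may need to be run inside the libvirt container:

  # the OVN metadata namespace and the veth end the agent created in it
  ip netns list | grep ovnmeta-f10af3d7
  ip netns exec ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 ip -o addr show
  # the machine registered for the new guest (qemu-2-instance-00000008)
  machinectl list
  virsh -c qemu:///system list --all
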
Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00082|binding|INFO|Setting lport 3c861704-c594-42f8-a5b3-a274ec84650f ovn-installed in OVS Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00083|binding|INFO|Setting lport 3c861704-c594-42f8-a5b3-a274ec84650f up in Southbound Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00084|binding|INFO|Setting lport 9adda630-e8be-4f28-9d6e-88decd53d5c0 up in Southbound Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.030 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.031 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[a74da3ff-5d86-4342-81fc-96b9bc64eaa5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.035 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.056 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[80378d7c-bd98-4e04-bb8e-2cf682007020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.060 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[75ca2b2a-bd93-47f9-9fb0-568d411baebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost NetworkManager[5972]: [1769939576.0635] manager: (tapf10af3d7-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/21) Feb 1 04:52:56 localhost podman[305295]: 2026-02-01 09:52:56.070199717 +0000 UTC m=+0.086380032 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.078 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost podman[305295]: 2026-02-01 09:52:56.082982596 +0000 UTC m=+0.099162921 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.092 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[340320f1-f27a-4767-a12b-226fcf9dfb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.094 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[d9094efb-0195-473c-afad-d748073e9496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
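
The healthcheck above is a transient systemd unit that runs podman healthcheck against the ceilometer_agent_compute container and then deactivates. If the result needs to be reproduced by hand, a sketch using the container name recorded in the log:

  # re-run the container healthcheck manually and show the container state
  podman healthcheck run ceilometer_agent_compute && echo healthy
  podman ps --filter name=ceilometer_agent_compute --format '{{.Names}} {{.Status}}'
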
Feb 1 04:52:56 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapf10af3d7-b1: link becomes ready Feb 1 04:52:56 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapf10af3d7-b0: link becomes ready Feb 1 04:52:56 localhost NetworkManager[5972]: [1769939576.1145] device (tapf10af3d7-b0): carrier: link connected Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.118 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[674254f4-3113-44c3-9c42-2a9067c5e1f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.133 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5f42c68c-e95a-48ac-8563-9ae08fc5ced0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf10af3d7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:7d:a7:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166021, 'reachable_time': 43965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 
'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305346, 'error': None, 'target': 'ovnmeta-f10af3d7-b861-4585-95de-68162ae73827', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.144 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f0112b23-2185-4bf1-a773-9443b72e0f1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:a738'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1166021, 'tstamp': 1166021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305351, 'error': None, 'target': 'ovnmeta-f10af3d7-b861-4585-95de-68162ae73827', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 225 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 9.3 MiB/s rd, 4.8 MiB/s wr, 226 op/s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.162 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5816c518-c589-4257-a0b2-d05a56520c25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf10af3d7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], 
['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:7d:a7:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166021, 'reachable_time': 43965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 
'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305356, 'error': None, 'target': 'ovnmeta-f10af3d7-b861-4585-95de-68162ae73827', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.180 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d0340481-3e5f-4320-97d2-77b679e3e137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.227 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[84c00111-c16e-4750-aa61-295551990670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.229 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf10af3d7-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.230 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.231 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf10af3d7-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.276 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost kernel: device tapf10af3d7-b0 entered promiscuous mode Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.283 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf10af3d7-b0, col_values=(('external_ids', {'iface-id': '2795e61c-14bf-4981-8534-106e0ef1f6ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.284 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00085|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.286 158655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f10af3d7-b861-4585-95de-68162ae73827.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f10af3d7-b861-4585-95de-68162ae73827.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.286 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.288 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ab15ac58-fe11-49fb-b0f1-a6ceaba9f12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.289 158655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: global Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: log /dev/log local0 debug Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: log-tag haproxy-metadata-proxy-f10af3d7-b861-4585-95de-68162ae73827 Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: user root Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: group root Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: maxconn 1024 Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: pidfile /var/lib/neutron/external/pids/f10af3d7-b861-4585-95de-68162ae73827.pid.haproxy Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: daemon Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: defaults Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: log global Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: mode http Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: option httplog Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: option dontlognull Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: option http-server-close Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: option forwardfor Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: retries 3 Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: timeout http-request 30s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: timeout connect 30s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: timeout client 32s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: timeout server 32s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: timeout http-keep-alive 30s Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: listen listener Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: bind 169.254.169.254:80 Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: server metadata /var/lib/neutron/metadata_proxy Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: http-request add-header X-OVN-Network-ID f10af3d7-b861-4585-95de-68162ae73827 Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.292 158655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f10af3d7-b861-4585-95de-68162ae73827', 'env', 'PROCESS_TAG=haproxy-f10af3d7-b861-4585-95de-68162ae73827', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f10af3d7-b861-4585-95de-68162ae73827.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.295 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.398 274321 DEBUG 
nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.398 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] VM Started (Lifecycle Event)#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.406 274321 DEBUG nova.compute.manager [req-69da1a6f-bec4-4b06-bf43-a949b5378ef3 req-de015c3c-5117-41a8-98de-7ea5a251312b 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.407 274321 DEBUG oslo_concurrency.lockutils [req-69da1a6f-bec4-4b06-bf43-a949b5378ef3 req-de015c3c-5117-41a8-98de-7ea5a251312b 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.407 274321 DEBUG oslo_concurrency.lockutils [req-69da1a6f-bec4-4b06-bf43-a949b5378ef3 req-de015c3c-5117-41a8-98de-7ea5a251312b 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.408 274321 DEBUG oslo_concurrency.lockutils [req-69da1a6f-bec4-4b06-bf43-a949b5378ef3 req-de015c3c-5117-41a8-98de-7ea5a251312b 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.408 274321 DEBUG nova.compute.manager [req-69da1a6f-bec4-4b06-bf43-a949b5378ef3 req-de015c3c-5117-41a8-98de-7ea5a251312b 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Processing event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.409 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.418 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 
aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.422 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.426 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 1 04:52:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e101 e101: 6 total, 6 up, 6 in Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.429 274321 INFO nova.virt.libvirt.driver [-] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Instance spawned successfully.#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.430 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.450 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.451 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.451 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] VM Paused (Lifecycle Event)#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.457 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.457 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.458 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.459 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.459 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.460 274321 DEBUG nova.virt.libvirt.driver [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.466 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 
2026-02-01 09:52:56.470 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.470 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] VM Resumed (Lifecycle Event)#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.496 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.499 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.518 274321 INFO nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Took 5.74 seconds to spawn the instance on the hypervisor.#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.519 274321 DEBUG nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.527 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.587 274321 INFO nova.compute.manager [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Took 6.64 seconds to build instance.#033[00m Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.615 274321 DEBUG oslo_concurrency.lockutils [None req-3f432c8d-f909-45bd-ad41-3a2b87f40070 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 6.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:56 localhost podman[305424]: Feb 1 04:52:56 localhost podman[305424]: 2026-02-01 09:52:56.700701832 +0000 UTC m=+0.089219993 container create 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:52:56 localhost systemd[1]: Started libpod-conmon-18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd.scope. Feb 1 04:52:56 localhost podman[305424]: 2026-02-01 09:52:56.659105835 +0000 UTC m=+0.047624066 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:52:56 localhost systemd[1]: Started libcrun container. 
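The nova_compute entries above carry explicit timings for this instance: "Took 5.74 seconds to spawn the instance on the hypervisor", "Took 6.64 seconds to build instance", and a build lock held for 6.737s. A minimal Python sketch for pulling such figures out of a journal export like this one, assuming one journal entry per line as journalctl emits them; the regexes and the extract_timings helper are illustrative assumptions, not part of Nova:

    import re
    import sys

    # Illustrative patterns for the oslo.log messages seen above; tune for other formats.
    SPAWN_RE = re.compile(r"Took (?P<secs>[\d.]+) seconds to "
                          r"(?P<what>spawn the instance on the hypervisor|build instance)")
    LOCK_RE = re.compile(r'Lock "(?P<name>[^"]+)" "released" .* held (?P<secs>[\d.]+)s')

    def extract_timings(lines):
        """Yield (label, seconds) for spawn/build times and lock hold times."""
        for line in lines:
            m = SPAWN_RE.search(line)
            if m:
                yield (m.group("what"), float(m.group("secs")))
            m = LOCK_RE.search(line)
            if m:
                yield ("lock held: " + m.group("name"), float(m.group("secs")))

    if __name__ == "__main__":
        for label, secs in extract_timings(sys.stdin):
            print(f"{secs:8.3f}s  {label}")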
Feb 1 04:52:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e3833c7d8d2b2c1fabb013d6f217a0b7dde45ed475f41dc07e52f74eb93e56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:56 localhost podman[305424]: 2026-02-01 09:52:56.786041041 +0000 UTC m=+0.174559192 container init 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:56 localhost podman[305424]: 2026-02-01 09:52:56.796088375 +0000 UTC m=+0.184606526 container start 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 1 04:52:56 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [NOTICE] (305441) : New worker (305443) forked Feb 1 04:52:56 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [NOTICE] (305441) : Loading success. 
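The podman entries above record the metadata-proxy container's lifecycle with monotonic offsets from the same podman process (m=+0.089 create, m=+0.175 init, m=+0.185 start). A rough sketch, again assuming one journal entry per line, that groups those offsets per container ID and reports the create-to-start latency; the regex and helper are assumptions for illustration only:

    import re
    import sys
    from collections import defaultdict

    # Matches e.g. "... m=+0.089219993 container create 18dc1643a331... (image=...)".
    # The m=+ offsets are relative to the emitting podman process, so only compare
    # events logged by the same podman[pid].
    EVENT_RE = re.compile(
        r"m=\+(?P<offset>[\d.]+) container (?P<event>create|init|start) (?P<cid>[0-9a-f]{12,64})")

    def container_latencies(lines):
        """Return {container_id: {event: offset_seconds}} from podman journal lines."""
        events = defaultdict(dict)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group("cid")][m.group("event")] = float(m.group("offset"))
        return events

    if __name__ == "__main__":
        for cid, ev in container_latencies(sys.stdin).items():
            if "create" in ev and "start" in ev:
                print(f"{cid[:12]}  create->start {ev['start'] - ev['create']:.3f}s")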
Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.855 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 3c861704-c594-42f8-a5b3-a274ec84650f in datapath 9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 unbound from our chassis#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.858 158655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.868 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[21855e73-97c1-466e-8b29-3409d726e1b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.868 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9acb9cb3-f1 in ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.871 303130 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9acb9cb3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.872 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba19668-d160-4d96-9ad7-d5112e5f4bd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.873 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d2916b0a-7ed7-482a-ba0e-87246f3d6af3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.880 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6dd96e-0960-4aa9-9a71-68f777645630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.893 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[629a1e60-85f7-4d6e-b2ce-15aba309f732]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00086|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.899 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_controller[152787]: 2026-02-01T09:52:56Z|00087|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:52:56 localhost nova_compute[274317]: 2026-02-01 09:52:56.910 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.922 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[a8200184-0f60-4d7b-a43c-e188bff322ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost NetworkManager[5972]: [1769939576.9320] manager: (tap9acb9cb3-f0): new Veth device 
(/org/freedesktop/NetworkManager/Devices/22) Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.930 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[71451b2a-ff68-4260-aa2b-b991675f2389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost systemd-udevd[305332]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.970 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[c0599f5c-f1ee-4633-8967-3df41800fa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:56 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:56.979 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf0b51a-450b-4348-a0b2-c8b3f82dec73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9acb9cb3-f0: link becomes ready Feb 1 04:52:57 localhost NetworkManager[5972]: [1769939577.0036] device (tap9acb9cb3-f0): carrier: link connected Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.009 304214 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f27ee9-e3b6-4638-8915-2e93ffb3fba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.029 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[1c42d1fa-610a-485d-a4c0-424f31530c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9acb9cb3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:3c:11:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': 
[['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166110, 'reachable_time': 27192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305464, 'error': None, 'target': 'ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_controller[152787]: 2026-02-01T09:52:57Z|00088|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:52:57 localhost nova_compute[274317]: 2026-02-01 09:52:57.043 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.052 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ddeb8f83-394d-42c3-8d03-aa01db9990b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1150'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1166110, 'tstamp': 1166110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305465, 
'error': None, 'target': 'ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.073 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[301ffce0-aa8c-4cec-92b9-bc237d244f78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9acb9cb3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:3c:11:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166110, 'reachable_time': 27192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 
'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305466, 'error': None, 'target': 'ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.103 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3d3c33-e78a-45d9-8e13-89e8b9ed11e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.165 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[736bf7d0-c6dc-4460-8fd5-446be0007abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.167 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9acb9cb3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.167 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.168 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9acb9cb3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:57 localhost kernel: device tap9acb9cb3-f0 entered promiscuous mode Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.177 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9acb9cb3-f0, col_values=(('external_ids', {'iface-id': '82d12955-5666-45d9-bcd4-64e768a2aca1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:52:57 localhost nova_compute[274317]: 2026-02-01 09:52:57.170 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:57 localhost ovn_controller[152787]: 2026-02-01T09:52:57Z|00089|binding|INFO|Releasing lport 82d12955-5666-45d9-bcd4-64e768a2aca1 from this chassis (sb_readonly=0) Feb 1 04:52:57 localhost nova_compute[274317]: 2026-02-01 09:52:57.193 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.194 158655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.196 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[dd438c51-97c3-4071-bfd6-17e67ff0da3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.197 158655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: global Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: log /dev/log local0 debug Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: log-tag haproxy-metadata-proxy-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: user root Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: group root Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: maxconn 1024 Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: pidfile /var/lib/neutron/external/pids/9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8.pid.haproxy Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: daemon Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: defaults Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: log global Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: mode http Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: option httplog Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: option dontlognull Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: option http-server-close Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: option forwardfor Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: retries 3 Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: timeout http-request 30s Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: timeout connect 30s Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: timeout client 32s Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: timeout server 32s Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: timeout http-keep-alive 30s Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: listen listener Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: bind 169.254.169.254:80 Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: server metadata /var/lib/neutron/metadata_proxy Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: http-request add-header X-OVN-Network-ID 9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 1 
04:52:57 localhost ovn_metadata_agent[158650]: 2026-02-01 09:52:57.199 158655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'env', 'PROCESS_TAG=haproxy-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 1 04:52:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:57.343 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:52:57Z, description=, device_id=a5140f30-05dc-4871-8e32-f21b0cfb774b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1f774af9-27f9-4f7f-a2be-1d66f28cfc73, ip_allocation=immediate, mac_address=fa:16:3e:c2:d0:66, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:52:50Z, description=, dns_domain=, id=fdca6946-14e8-4692-9d79-41002e703846, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1230844200-network, port_security_enabled=True, project_id=41697a815dfa4c5aaae37b529f6303e1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7344, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=598, status=ACTIVE, subnets=['2ecf4d67-5d58-4ddd-8fc7-11233acff6bf'], tags=[], tenant_id=41697a815dfa4c5aaae37b529f6303e1, updated_at=2026-02-01T09:52:52Z, vlan_transparent=None, network_id=fdca6946-14e8-4692-9d79-41002e703846, port_security_enabled=False, project_id=41697a815dfa4c5aaae37b529f6303e1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=630, status=DOWN, tags=[], tenant_id=41697a815dfa4c5aaae37b529f6303e1, updated_at=2026-02-01T09:52:57Z on network fdca6946-14e8-4692-9d79-41002e703846#033[00m Feb 1 04:52:57 localhost dnsmasq[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/addn_hosts - 1 addresses Feb 1 04:52:57 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/host Feb 1 04:52:57 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/opts Feb 1 04:52:57 localhost podman[305506]: 2026-02-01 09:52:57.533130409 +0000 UTC m=+0.060608381 container kill a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:52:57 localhost podman[305527]: Feb 1 04:52:57 localhost podman[305527]: 2026-02-01 09:52:57.600393476 +0000 UTC m=+0.075372821 container create 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 1 04:52:57 localhost systemd[1]: Started libpod-conmon-20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac.scope. Feb 1 04:52:57 localhost systemd[1]: Started libcrun container. Feb 1 04:52:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39afd0bb396a392dfd50d36fe6caf2b1c9a1e9797d65ee8ff3803b1095d1a5f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:52:57 localhost podman[305527]: 2026-02-01 09:52:57.662496381 +0000 UTC m=+0.137475746 container init 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:52:57 localhost podman[305527]: 2026-02-01 09:52:57.566727006 +0000 UTC m=+0.041706361 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 1 04:52:57 localhost podman[305527]: 2026-02-01 09:52:57.673186665 +0000 UTC m=+0.148166010 container start 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:52:57 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [NOTICE] (305555) : New worker (305557) forked Feb 1 04:52:57 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [NOTICE] (305555) : Loading success. 
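The metadata agent dumps its generated haproxy configuration one config line per journal entry (the "haproxy_cfg =" block above, ending at the "create_config_file" marker), so each line arrives behind a syslog prefix. A hedged sketch for recovering the plain config text from such a dump, assuming one journal entry per line and the prefix layout seen in this export; PREFIX_RE and recover_haproxy_cfg are illustrative names, not Neutron code:

    import re
    import sys

    # Syslog prefix as it appears in this export, e.g.
    # "Feb 1 04:52:57 localhost ovn_metadata_agent[158650]: <payload>".
    PREFIX_RE = re.compile(r"^\w{3} +\d+ [\d:]{8} \S+ ovn_metadata_agent\[\d+\]: ?")

    def recover_haproxy_cfg(lines):
        """Strip the syslog prefix and return the config text logged between
        'haproxy_cfg =' and the trailing 'create_config_file' marker."""
        cfg, capturing = [], False
        for line in lines:
            payload = PREFIX_RE.sub("", line.rstrip("\n"))
            if "haproxy_cfg =" in payload:
                capturing = True
                continue
            if capturing:
                if "create_config_file" in payload:
                    break
                cfg.append(payload)
        return "\n".join(cfg)

    if __name__ == "__main__":
        print(recover_haproxy_cfg(sys.stdin))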
Feb 1 04:52:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:57.746 259225 INFO neutron.agent.dhcp.agent [None req-64c0cadf-4783-4e2c-97ad-47f8c4c4f09c - - - - - -] DHCP configuration for ports {'1f774af9-27f9-4f7f-a2be-1d66f28cfc73'} is completed#033[00m Feb 1 04:52:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 2.2 MiB/s wr, 191 op/s Feb 1 04:52:58 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:58.194 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:52:57Z, description=, device_id=a5140f30-05dc-4871-8e32-f21b0cfb774b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1f774af9-27f9-4f7f-a2be-1d66f28cfc73, ip_allocation=immediate, mac_address=fa:16:3e:c2:d0:66, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:52:50Z, description=, dns_domain=, id=fdca6946-14e8-4692-9d79-41002e703846, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1230844200-network, port_security_enabled=True, project_id=41697a815dfa4c5aaae37b529f6303e1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7344, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=598, status=ACTIVE, subnets=['2ecf4d67-5d58-4ddd-8fc7-11233acff6bf'], tags=[], tenant_id=41697a815dfa4c5aaae37b529f6303e1, updated_at=2026-02-01T09:52:52Z, vlan_transparent=None, network_id=fdca6946-14e8-4692-9d79-41002e703846, port_security_enabled=False, project_id=41697a815dfa4c5aaae37b529f6303e1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=630, status=DOWN, tags=[], tenant_id=41697a815dfa4c5aaae37b529f6303e1, updated_at=2026-02-01T09:52:57Z on network fdca6946-14e8-4692-9d79-41002e703846#033[00m Feb 1 04:52:58 localhost dnsmasq[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/addn_hosts - 1 addresses Feb 1 04:52:58 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/host Feb 1 04:52:58 localhost podman[305582]: 2026-02-01 09:52:58.403084476 +0000 UTC m=+0.061622842 container kill a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:52:58 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/opts Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.466 274321 DEBUG nova.compute.manager [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] 
[instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.467 274321 DEBUG oslo_concurrency.lockutils [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.468 274321 DEBUG oslo_concurrency.lockutils [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.468 274321 DEBUG oslo_concurrency.lockutils [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.468 274321 DEBUG nova.compute.manager [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.469 274321 WARNING nova.compute.manager [req-e6a913f1-11f9-4c84-b43e-7d6da1f11800 req-5ce59bf2-6e97-481c-b3a9-de5f6a93d356 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received unexpected event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with vm_state active and task_state None.#033[00m Feb 1 04:52:58 localhost ovn_controller[152787]: 2026-02-01T09:52:58Z|00090|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:52:58 localhost ovn_controller[152787]: 2026-02-01T09:52:58Z|00091|binding|INFO|Releasing lport 82d12955-5666-45d9-bcd4-64e768a2aca1 from this chassis (sb_readonly=0) Feb 1 04:52:58 localhost nova_compute[274317]: 2026-02-01 09:52:58.578 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:58 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:52:58.632 259225 INFO neutron.agent.dhcp.agent [None req-13614b67-4779-40c7-8e30-40a340638019 - - - - - -] DHCP configuration for ports {'1f774af9-27f9-4f7f-a2be-1d66f28cfc73'} is completed#033[00m Feb 1 04:52:59 
localhost nova_compute[274317]: 2026-02-01 09:52:59.414 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:52:59 localhost nova_compute[274317]: 2026-02-01 09:52:59.755 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Check if temp file /var/lib/nova/instances/tmp58hd61t0 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m Feb 1 04:52:59 localhost nova_compute[274317]: 2026-02-01 09:52:59.755 274321 DEBUG nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp58hd61t0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m Feb 1 04:53:00 localhost podman[236852]: time="2026-02-01T09:53:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:53:00 localhost podman[236852]: @ - - [01/Feb/2026:09:53:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159554 "" "Go-http-client/1.1" Feb 1 04:53:00 localhost podman[236852]: @ - - [01/Feb/2026:09:53:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19704 "" "Go-http-client/1.1" Feb 1 04:53:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 191 op/s Feb 1 04:53:00 localhost nova_compute[274317]: 2026-02-01 09:53:00.170 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:01 localhost dnsmasq[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/addn_hosts - 0 addresses Feb 1 04:53:01 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/host Feb 1 04:53:01 localhost dnsmasq-dhcp[305163]: read /var/lib/neutron/dhcp/fdca6946-14e8-4692-9d79-41002e703846/opts Feb 1 04:53:01 localhost podman[305621]: 2026-02-01 09:53:01.164459721 +0000 UTC m=+0.055448780 container kill a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:53:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:53:01 localhost systemd[1]: tmp-crun.691ndm.mount: Deactivated successfully. Feb 1 04:53:01 localhost nova_compute[274317]: 2026-02-01 09:53:01.308 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:01 localhost podman[305634]: 2026-02-01 09:53:01.312194226 +0000 UTC m=+0.126941678 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:53:01 localhost podman[305634]: 2026-02-01 09:53:01.318632987 +0000 UTC m=+0.133380429 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:53:01 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
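The spawn, metadata provisioning, and live-migration pre-check activity above can be followed across services (nova_compute, ovn_metadata_agent, ovn_controller, neutron_dhcp_agent) by the instance UUID aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469. A small sketch, assuming one journal entry per line, that buckets matching lines by the logging program; the regex and helper names are assumptions for this syslog layout:

    import re
    import sys
    from collections import Counter

    # "program[pid]:" field of the syslog lines in this export; kernel lines have no pid.
    PROG_RE = re.compile(r"^\w{3} +\d+ [\d:]{8} \S+ ([\w.-]+)\[\d+\]:")

    def lines_for_instance(lines, instance_uuid):
        """Yield (program, line) for every journal line mentioning the instance UUID."""
        for line in lines:
            if instance_uuid in line:
                m = PROG_RE.match(line)
                yield (m.group(1) if m else "unknown", line)

    if __name__ == "__main__":
        uuid = "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469"  # instance from the entries above
        counts = Counter(prog for prog, _ in lines_for_instance(sys.stdin, uuid))
        for prog, n in counts.most_common():
            print(f"{n:5d}  {prog}")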
Feb 1 04:53:01 localhost kernel: device tap7ad39b92-32 left promiscuous mode Feb 1 04:53:01 localhost ovn_controller[152787]: 2026-02-01T09:53:01Z|00092|binding|INFO|Releasing lport 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f from this chassis (sb_readonly=0) Feb 1 04:53:01 localhost ovn_controller[152787]: 2026-02-01T09:53:01Z|00093|binding|INFO|Setting lport 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f down in Southbound Feb 1 04:53:01 localhost nova_compute[274317]: 2026-02-01 09:53:01.447 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:01 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:01.458 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fdca6946-14e8-4692-9d79-41002e703846', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca6946-14e8-4692-9d79-41002e703846', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41697a815dfa4c5aaae37b529f6303e1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=013c5d80-ec0c-4f6b-91c1-a2283198de95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:01 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:01.460 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 7ad39b92-32e7-4263-9dbc-bbb8eeb03c9f in datapath fdca6946-14e8-4692-9d79-41002e703846 unbound from our chassis#033[00m Feb 1 04:53:01 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:01.464 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdca6946-14e8-4692-9d79-41002e703846, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:53:01 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:01.465 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[fb167eaa-06f5-4722-960f-a6882b40f940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:01 localhost nova_compute[274317]: 2026-02-01 09:53:01.465 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:01 localhost nova_compute[274317]: 2026-02-01 09:53:01.466 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:01 localhost openstack_network_exporter[239388]: ERROR 09:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 
04:53:01 localhost openstack_network_exporter[239388]: Feb 1 04:53:01 localhost openstack_network_exporter[239388]: ERROR 09:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:53:01 localhost openstack_network_exporter[239388]: Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.081 274321 DEBUG nova.compute.manager [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.082 274321 DEBUG oslo_concurrency.lockutils [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.082 274321 DEBUG oslo_concurrency.lockutils [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.082 274321 DEBUG oslo_concurrency.lockutils [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.083 274321 DEBUG nova.compute.manager [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.083 274321 DEBUG nova.compute.manager [req-e96ac434-6161-40f5-9851-d0f1a0fc83e7 req-b8854354-eb8c-4756-adb8-5563a2e78b07 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 1 04:53:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 191 op/s Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.987 274321 INFO nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Took 2.76 seconds for pre_live_migration on destination host np0005604213.localdomain.#033[00m Feb 1 04:53:02 localhost nova_compute[274317]: 2026-02-01 09:53:02.988 274321 DEBUG nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.001 274321 DEBUG nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp58hd61t0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(ea09c78d-8a1e-497d-978c-c737a6e34821),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.003 274321 DEBUG nova.objects.instance [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lazy-loading 'migration_context' on Instance uuid aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.004 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.006 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.006 274321 DEBUG 
nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.015 274321 DEBUG nova.virt.libvirt.vif [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-01T09:52:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1216472824',display_name='tempest-LiveMigrationTest-server-1216472824',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604215.localdomain',hostname='tempest-livemigrationtest-server-1216472824',id=8,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-01T09:52:56Z,launched_on='np0005604215.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005604215.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d8e4b0fb12f14fbaa248291aa43aacee',ramdisk_id='',reservation_id='r-w7wsdj02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-266774784',owner_user_name='tempest-LiveMigrationTest-266774784-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-01T09:52:56Z,user_data=None,user_id='0416f10a8d4f4da2a6dc6cbd271a3010',uuid=aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3c861704-c5", "ovs_interfaceid": 
"3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.015 274321 DEBUG nova.network.os_vif_util [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Converting VIF {"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.016 274321 DEBUG nova.network.os_vif_util [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.016 274321 DEBUG nova.virt.libvirt.migration [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updating guest XML with vif config: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: Feb 1 04:53:03 localhost nova_compute[274317]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.017 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] About to invoke the migrate API _live_migration_operation 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Feb 1 04:53:03 localhost ovn_controller[152787]: 2026-02-01T09:53:03Z|00094|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:53:03 localhost ovn_controller[152787]: 2026-02-01T09:53:03Z|00095|binding|INFO|Releasing lport 82d12955-5666-45d9-bcd4-64e768a2aca1 from this chassis (sb_readonly=0) Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.338 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:03 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e102 e102: 6 total, 6 up, 6 in Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.508 274321 DEBUG nova.virt.libvirt.migration [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.509 274321 INFO nova.virt.libvirt.migration [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Feb 1 04:53:03 localhost nova_compute[274317]: 2026-02-01 09:53:03.585 274321 INFO nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Feb 1 04:53:03 localhost systemd[1]: tmp-crun.bQwlM9.mount: Deactivated successfully. Feb 1 04:53:03 localhost dnsmasq[305163]: exiting on receipt of SIGTERM Feb 1 04:53:03 localhost podman[305685]: 2026-02-01 09:53:03.779006139 +0000 UTC m=+0.082273156 container kill a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:03 localhost systemd[1]: libpod-a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551.scope: Deactivated successfully. 
Feb 1 04:53:03 localhost podman[305698]: 2026-02-01 09:53:03.857049541 +0000 UTC m=+0.063348756 container died a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:53:03 localhost podman[305698]: 2026-02-01 09:53:03.896237803 +0000 UTC m=+0.102536998 container cleanup a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:53:03 localhost systemd[1]: libpod-conmon-a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551.scope: Deactivated successfully. Feb 1 04:53:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:03.913 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}b2254b33e02cd88b333d7a2648b2a1c4e56223d8c3a05b6047a2f46f1c9b1e9f" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Feb 1 04:53:03 localhost podman[305700]: 2026-02-01 09:53:03.957857444 +0000 UTC m=+0.157395807 container remove a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca6946-14e8-4692-9d79-41002e703846, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:53:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:03.993 259225 INFO neutron.agent.dhcp.agent [None req-aab5f074-fbc2-408b-94c6-8297f84ed382 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:53:04 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:04.004 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.087 274321 DEBUG nova.virt.libvirt.migration [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.088 274321 DEBUG nova.virt.libvirt.migration [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.131 274321 DEBUG nova.compute.manager [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.131 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.132 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.132 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.132 274321 DEBUG nova.compute.manager [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.133 274321 WARNING nova.compute.manager [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received unexpected event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with vm_state active and task_state migrating.#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 
09:53:04.133 274321 DEBUG nova.compute.manager [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-changed-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.133 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Sun, 01 Feb 2026 09:53:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ac906038-eb2a-46f7-b146-08f2a10e1e76 x-openstack-request-id: req-ac906038-eb2a-46f7-b146-08f2a10e1e76 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.133 274321 DEBUG nova.compute.manager [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Refreshing instance network info cache due to event network-changed-3c861704-c594-42f8-a5b3-a274ec84650f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.134 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.134 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquired lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.135 274321 DEBUG nova.network.neutron [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Refreshing network info cache for port 3c861704-c594-42f8-a5b3-a274ec84650f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.135 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "04b6d75f-0335-413a-b9d6-dfe49d77feaf", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf"}]}, {"id": "371ff7cc-43c7-4354-b1ce-55c23740c8c8", "name": "m1.small", "links": [{"rel": "self", "href": 
"http://nova-internal.openstack.svc:8774/v2.1/flavors/371ff7cc-43c7-4354-b1ce-55c23740c8c8"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/371ff7cc-43c7-4354-b1ce-55c23740c8c8"}]}, {"id": "d824a107-9738-4ab8-b2ca-4ac633695018", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/d824a107-9738-4ab8-b2ca-4ac633695018"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/d824a107-9738-4ab8-b2ca-4ac633695018"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.136 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-ac906038-eb2a-46f7-b146-08f2a10e1e76 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.141 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}b2254b33e02cd88b333d7a2648b2a1c4e56223d8c3a05b6047a2f46f1c9b1e9f" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Feb 1 04:53:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v126: 177 pgs: 177 active+clean; 273 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 215 op/s Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.157 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Sun, 01 Feb 2026 09:53:04 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c644d01c-69d5-4b53-b4bc-f8743be220ca x-openstack-request-id: req-c644d01c-69d5-4b53-b4bc-f8743be220ca _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.157 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "04b6d75f-0335-413a-b9d6-dfe49d77feaf", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.158 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/04b6d75f-0335-413a-b9d6-dfe49d77feaf used request id req-c644d01c-69d5-4b53-b4bc-f8743be220ca request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.160 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'name': 
'tempest-LiveMigrationTest-server-1216472824', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'np0005604215.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'hostId': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.161 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.194 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.196 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5528538a-368c-4c7d-9d75-add6c9fb4342', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 23775232, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.161479', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd675d96-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '468acf47711a1fc14d210938698e9dbc603ab512f200bb64f7574fe47737939c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2048, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 
'timestamp': '2026-02-01T09:53:04.161479', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd678370-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '5a20c2fdc2eb8f524278d2f88a476a7b8741106bae07acd7ec24f4f24f8e6cde'}]}, 'timestamp': '2026-02-01 09:53:04.197590', '_unique_id': '838aaabc62cf4218ac5c4234d7c0020e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging 
conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.204 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.211 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.215 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 / tap3c861704-c5 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.216 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74abe0bb-1ba9-46fd-afbc-4da676f63708', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 90, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.212219', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd6a984e-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': 'c1bdd51776f50bab757749a0b0937cbe56d945eb140045f07f77218b70178eda'}]}, 'timestamp': '2026-02-01 09:53:04.217163', '_unique_id': '82f76ac227b14c45afed15d87236e108'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.218 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.223 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.224 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.latency volume: 0 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '64467bae-b96d-4e5a-9806-d1c0f9e48230', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.223414', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd6ba9fa-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '7bd4ea2e1003a957a1f59d10624a2a6269d823331f77be7dfa4d06ec0f3e6b1e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.223414', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd6bc494-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': 'be76554aaa01d17ed8dca31ae100ed6fe83d3076f33053ee192b439595284d10'}]}, 'timestamp': '2026-02-01 09:53:04.224875', '_unique_id': '78d938bfc6b6469c8775d0955edfd178'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 
localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 
09:53:04.232 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.232 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.233 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.261 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/cpu volume: 7270000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '81de59f9-9754-4bd0-9bad-e705b8cabdf9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 7270000000, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'timestamp': '2026-02-01T09:53:04.234036', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': 'cd71768c-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.439842278, 'message_signature': '4b3144f5f03408e0f58a42e1fe954631a3d62006026100399012f717f8cd1b1d'}]}, 'timestamp': '2026-02-01 09:53:04.262186', '_unique_id': 'c7eb0e8b1e6d40ec9a87df95eaffa599'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return 
self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging return retry_over_time( 
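
Every one of these failed-notification errors bottoms out in the same call chain: oslo.messaging's impl_rabbit driver calls kombu's Connection.ensure_connection(), kombu retries via retry_over_time(), and the underlying amqp transport gets ECONNREFUSED (errno 111) from socket.connect() because nothing is answering on the broker port, which kombu re-raises as kombu.exceptions.OperationalError. Below is a minimal sketch of how that connectivity check could be reproduced outside the agent; the broker URL and credentials are placeholders (not taken from this log) and would need to be replaced with the transport_url the ceilometer agent is actually configured with.

from kombu import Connection
from kombu.exceptions import OperationalError

# Placeholder/assumed broker URL -- substitute the real transport_url
# (host, port, vhost, credentials) used by the telemetry services.
BROKER_URL = "amqp://guest:guest@localhost:5672//"

def check_broker(url: str = BROKER_URL) -> bool:
    """Attempt an AMQP connection via the same kombu entry point seen in the traceback."""
    conn = Connection(url, connect_timeout=5)
    try:
        # ensure_connection() is the call made from
        # oslo_messaging/_drivers/impl_rabbit.py in the traceback; kombu wraps
        # the socket-level ConnectionRefusedError ([Errno 111]) in
        # kombu.exceptions.OperationalError.
        conn.ensure_connection(max_retries=1)
        return True
    except OperationalError as exc:
        print(f"broker unreachable: {exc}")
        return False
    finally:
        conn.release()

if __name__ == "__main__":
    print("broker reachable" if check_broker() else "broker down or refusing connections")

If this check fails with the same [Errno 111], the notifications are being dropped because the message bus itself is down or not listening on that host/port, not because of anything in the polling payloads; once the broker is reachable again, subsequent poll cycles should publish normally.
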
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.263 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.264 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.264 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.264 12 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469: ceilometer.compute.pollsters.NoVolumeException Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.264 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.265 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5f6692a3-8708-41ee-84f8-9ff5a50da426', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.264998', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd71f86e-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '57011b6aff5cfa9a68141fab76f50014fbef1c404896a44f8f717e0bc731d852'}]}, 'timestamp': '2026-02-01 09:53:04.265440', '_unique_id': '5f280fc992354ae38ed3d395a1b318f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.266 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.267 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.280 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.281 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a2f400fe-3bba-4f38-8ecf-5ac39d14a1fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.267126', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd7470d0-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': '799fa6377774f37bc9701c6401c5b28e3b0c6b1e29d65b068c86470f4d49c51d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.267126', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd7489f8-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': 'd911dd07f48fa8129e137d6097f9657e824c26f93b65f5712c754fddad40d28a'}]}, 'timestamp': '2026-02-01 09:53:04.282593', '_unique_id': '7ba7089bff8240558562841a8c636786'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 
localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR 
oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.284 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.286 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.287 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.287 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '59a941c0-e733-48d5-a361-da8aaff70f18', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.287230', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd756274-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '72d026008701cda96b188d8eb7d34fbe741d7ac37618791cdd4a99d7753f9df9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.287230', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd757a0c-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '6862f9dff4d98d821ee4b2da85bf9f8605eedffd2dee63d1923e2e511c7c2e6a'}]}, 'timestamp': '2026-02-01 09:53:04.288547', '_unique_id': 'f162552d305742a7983c9fc9a06215dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging 
yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 
09:53:04.290 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.290 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.292 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.292 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.293 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8b9c1338-e2b2-4944-9116-9952d43e5cea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.292904', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd763f14-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': 'f961a6392974dfcac3cb0614937948a3b91ed2efa7198bc28b66a43f06d25d56'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.292904', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd765d00-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': 'f8a6f606fd2ba17b9bc284cf81e18f71dde47be1640197506904b60fce33acb7'}]}, 'timestamp': '2026-02-01 09:53:04.294281', '_unique_id': 'e6bc695eddfb4354a17233cfb0bddb85'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 
localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR 
oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.295 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.297 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.298 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.298 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.299 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.300 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for 
PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.300 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.301 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.301 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5de2ed71-b664-46a8-80d3-b8d58e4591fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.301533', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd77a278-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '11b87d519f6d74a9f283fd2553f4425b7a8fb190817e97785b8590bf5aada41c'}]}, 'timestamp': '2026-02-01 09:53:04.302634', '_unique_id': '5554b36b7e6d4cfb946ad26327cfc9dd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR 
oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.303 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.306 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.307 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5579acb3-7299-4643-935e-2f1cb5a62318', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 1, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.307885', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd789322-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '08ff07aebad11774bf0787592d45441eec91e8fe8274cdfdcca69f1aa4010527'}]}, 'timestamp': '2026-02-01 09:53:04.308804', '_unique_id': '9c10451b948a4ebf81458b2398ae1a36'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.312 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.315 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.316 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.317 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3d5319c4-c281-4654-8058-ce89a91d4a2a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.316071', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd79e6be-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '79a5d4a4de8559cd740d3057d75626b8afed5bc8a91fcaa92764226722eca2ec'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.316071', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd7a13b4-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': 'b78d4f8853ac31393d138861a4294908aca3b4712eaedf8af68c4cb33a7e1bf8'}]}, 'timestamp': '2026-02-01 09:53:04.319168', '_unique_id': 'd997cd6ca05c4890ace277d44b5195f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 
localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR 
oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.320 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.322 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.322 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a2a02e14-8ee5-419c-acc1-30dbdaa12530', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.322920', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7ad268-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '3d9edf7634ad03a6a3ef746cbe2a4dfb618c08eb06a86f300c36234c9fc8d398'}]}, 'timestamp': '2026-02-01 09:53:04.323415', '_unique_id': 'e50dd316d2e84eee9190fb05baf0c490'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.324 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.326 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.329 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.328 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.329 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] VM Paused (Lifecycle Event)#033[00m Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '210a017c-36a5-48d3-94bf-ed32bf1221ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.329055', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7bc7d6-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '9380d86c7a9aeab58f9542fd7ec1e6b091c97028df172d0d86eca583d2ef2f99'}]}, 'timestamp': '2026-02-01 09:53:04.329720', '_unique_id': '29f1143dc4a04cd88f88f7bae3d1c114'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.331 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a4131749-ecfe-4bb3-93c5-0d6a21f257df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.331974', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7c2fc8-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '59b1d7bc23f861eac2a345c4a13af723ce06200c275609028cdf2dd5a497aad4'}]}, 'timestamp': '2026-02-01 09:53:04.332361', '_unique_id': '3c1012ab3d2b4a4bb86819192819e06a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.332 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.333 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.333 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e0db7823-ca3b-4137-9649-d445b7e9ea1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.333837', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7c76e0-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '5940d431c6fd92047d75a36a6f4bc6315ccf026240e3af66e19cedb48459a636'}]}, 'timestamp': '2026-02-01 09:53:04.334213', '_unique_id': '5b7fb62455e943438d6f132966b518b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.334 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.335 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.335 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f6fa4010-4c43-4638-ba12-2ce79951d57e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.335610', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7cbc90-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': '88b8ae1f0528a561f7397a953ba85608385af3d90338f2107b2e19e0be731814'}]}, 'timestamp': '2026-02-01 09:53:04.335942', '_unique_id': '5d26c13140124ebcbe15db2054ac24c9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.336 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.337 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.337 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.337 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f9c10bc0-926e-49a3-b353-1a89443e4504', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 760, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.337364', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd7d0056-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '48d4702e892061bef399841a36a879d962949411b0f22afcb729e0fa8b7f5401'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.337364', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd7d0d4e-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '9146fce042392c4d6ef5770c1dfc49618b5a21fc13a1d265ebc85213f44b9ba4'}]}, 'timestamp': '2026-02-01 09:53:04.337995', '_unique_id': '9d3b0ca748874959a390ba0502b57f66'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging 
yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 
09:53:04.338 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.338 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.339 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.339 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.latency volume: 1011941075 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.339 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.read.latency volume: 1643661 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '10a31374-40e7-4515-ab58-4828367536e1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1011941075, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.339459', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd7d520e-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '67a45de5643a4d032cdabc649707393b37ee965de51a2179ace48941653baf43'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1643661, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.339459', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd7d5dee-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.340424279, 'message_signature': '4197c0ad6419b387260c237ce80dc603d4ec07bc04c9ff37ff76846fd13fa1eb'}]}, 'timestamp': '2026-02-01 09:53:04.340056', '_unique_id': '1dfe80af780940999750529660a03cbe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging 
yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 
09:53:04.340 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.340 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.341 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.341 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.341 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.341 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.341 12 DEBUG ceilometer.compute.pollsters [-] 
aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e5b2f498-73e9-4d3a-ace2-6f8d671a2440', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'instance-00000008-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-tap3c861704-c5', 'timestamp': '2026-02-01T09:53:04.341967', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'tap3c861704-c5', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:c4:5a:4a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap3c861704-c5'}, 'message_id': 'cd7db442-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.391224413, 'message_signature': 'fe3b79d8a8843a52ec350807bc83903fb66ae183d460dfcde6d397336fe12c64'}]}, 'timestamp': '2026-02-01 09:53:04.342301', '_unique_id': '7a1bf14e883c4540b221c39a7319422d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging conn.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Feb 
1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.342 12 ERROR oslo_messaging.notify.messaging Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.343 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.343 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.343 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.344 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.344 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.344 12 DEBUG ceilometer.compute.pollsters [-] aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '731498f3-34e3-4a5f-b3be-b2df51e784d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-vda', 'timestamp': '2026-02-01T09:53:04.344243', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'cd7e0ed8-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': 'ca658ef0dba6a457ab19454ebcca764d72eaac3006cb7ebfaeed84cea4bb6724'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '0416f10a8d4f4da2a6dc6cbd271a3010', 'user_name': None, 'project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'project_name': None, 'resource_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-sda', 'timestamp': '2026-02-01T09:53:04.344243', 'resource_metadata': {'display_name': 'tempest-LiveMigrationTest-server-1216472824', 'name': 'instance-00000008', 'instance_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'instance_type': 'm1.nano', 'host': '8174b57508600476c08099301c2932d0aaf5f05709fcea5b89529be1', 'instance_host': 'np0005604215.localdomain', 'flavor': {'id': '04b6d75f-0335-413a-b9d6-dfe49d77feaf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}, 'image_ref': 'a223c2d3-3df7-4d82-921c-31ace200d43c', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'cd7e19e6-ff53-11f0-b4d6-fa163ed0c8c4', 'monotonic_time': 11668.446037592, 'message_signature': 'b9467638dba4edf3ef018b30aa0c835e2b83837f75a8ed7149952192ea4c0bc4'}]}, 'timestamp': '2026-02-01 09:53:04.344868', '_unique_id': 'b5bfa140c0264b85b835764c2e27a09e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging yield Feb 1 04:53:04 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging conn.connect()
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Feb 1 04:53:04 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:53:04.346 12 ERROR oslo_messaging.notify.messaging
Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.350 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.353 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.377 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] During sync_power_state the instance has a pending task (migrating).
Skip.#033[00m Feb 1 04:53:04 localhost kernel: device tap3c861704-c5 left promiscuous mode Feb 1 04:53:04 localhost NetworkManager[5972]: [1769939584.4831] device (tap3c861704-c5): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.493 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00096|binding|INFO|Releasing lport 3c861704-c594-42f8-a5b3-a274ec84650f from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00097|binding|INFO|Setting lport 3c861704-c594-42f8-a5b3-a274ec84650f down in Southbound Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00098|binding|INFO|Releasing lport 9adda630-e8be-4f28-9d6e-88decd53d5c0 from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00099|binding|INFO|Setting lport 9adda630-e8be-4f28-9d6e-88decd53d5c0 down in Southbound Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00100|binding|INFO|Removing iface tap3c861704-c5 ovn-installed in OVS Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00101|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00102|binding|INFO|Releasing lport 82d12955-5666-45d9-bcd4-64e768a2aca1 from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.503 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:8a:c3 19.80.0.117'], port_security=['fa:16:3e:87:8a:c3 19.80.0.117'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['3c861704-c594-42f8-a5b3-a274ec84650f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-599288938', 'neutron:cidrs': '19.80.0.117/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f10af3d7-b861-4585-95de-68162ae73827', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-599288938', 'neutron:project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'neutron:revision_number': '3', 'neutron:security_group_ids': '3c3daae5-f0f3-42a8-b893-8c534dcb0055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=13e91b2c-4ccc-47a7-a97e-5773902dea41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=9adda630-e8be-4f28-9d6e-88decd53d5c0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.507 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5a:4a 10.100.0.12'], port_security=['fa:16:3e:c4:5a:4a 10.100.0.12'], type=, 
nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain,np0005604213.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6d5b1744-6b18-45d1-b363-5f956c1e98d7'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1236294281', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1236294281', 'neutron:project_id': 'd8e4b0fb12f14fbaa248291aa43aacee', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3c3daae5-f0f3-42a8-b893-8c534dcb0055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49493626-0ffa-4ff3-a83b-4e74511666de, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=3c861704-c594-42f8-a5b3-a274ec84650f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.508 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00103|ovn_bfd|INFO|Disabled BFD on interface ovn-2186fb-0 Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00104|ovn_bfd|INFO|Disabled BFD on interface ovn-e1cc33-0 Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00105|ovn_bfd|INFO|Disabled BFD on interface ovn-45aa31-0 Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.514 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.509 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9adda630-e8be-4f28-9d6e-88decd53d5c0 in datapath f10af3d7-b861-4585-95de-68162ae73827 unbound from our chassis#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.512 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f10af3d7-b861-4585-95de-68162ae73827, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.512 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[60ece4d5-04a0-460e-9d05-77a9017aab43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.513 158655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 namespace which is not needed anymore#033[00m Feb 1 04:53:04 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully. Feb 1 04:53:04 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 8.380s CPU time. 
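The ceilometer_agent_compute traceback above ends in kombu.exceptions.OperationalError: [Errno 111] Connection refused, i.e. oslo.messaging could not reach its RabbitMQ broker while sending a notification. A minimal sketch of the same kombu retry path, with an assumed broker URL and retry settings rather than the deployed oslo.messaging configuration:

    # Sketch of kombu's ensure_connection() retry path; broker URL, retry
    # counts and the errback below are illustrative assumptions.
    import kombu
    from kombu.exceptions import OperationalError

    BROKER_URL = "amqp://guest:guest@rabbitmq.example:5672//"  # hypothetical

    def log_retry(exc, interval):
        # kombu calls this between attempts while the broker refuses connections.
        print(f"broker unreachable ({exc}); retrying in {interval}s")

    conn = kombu.Connection(BROKER_URL, connect_timeout=5)
    try:
        conn.ensure_connection(errback=log_retry, max_retries=3,
                               interval_start=2, interval_step=2)
    except OperationalError as exc:
        # The terminal state in the log: [Errno 111] Connection refused,
        # re-raised by kombu as OperationalError.
        print(f"giving up: {exc}")
    finally:
        conn.release()

When retries are exhausted, kombu re-raises the socket error as a library error, which is exactly the OperationalError surfaced in the traceback above.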
Feb 1 04:53:04 localhost systemd-machined[202466]: Machine qemu-2-instance-00000008 terminated. Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.538 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00106|binding|INFO|Releasing lport 2795e61c-14bf-4981-8534-106e0ef1f6ea from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost ovn_controller[152787]: 2026-02-01T09:53:04Z|00107|binding|INFO|Releasing lport 82d12955-5666-45d9-bcd4-64e768a2aca1 from this chassis (sb_readonly=0) Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.542 274321 DEBUG nova.network.neutron [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updated VIF entry in instance network info cache for port 3c861704-c594-42f8-a5b3-a274ec84650f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.542 274321 DEBUG nova.network.neutron [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Updating instance_info_cache with network_info: [{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005604213.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.544 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.566 274321 DEBUG oslo_concurrency.lockutils [req-2041bad4-b838-4660-9c17-42fd2a1f7647 req-0927f14c-811a-4976-9a22-ff204b93c533 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Releasing lock "refresh_cache-aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:53:04 localhost journal[224673]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk: 
No such file or directory Feb 1 04:53:04 localhost journal[224673]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_disk: No such file or directory Feb 1 04:53:04 localhost NetworkManager[5972]: [1769939584.6519] manager: (tap3c861704-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/23) Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.665 274321 DEBUG nova.virt.libvirt.guest [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.666 274321 INFO nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migration operation has completed#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.666 274321 INFO nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] _post_live_migration() is started..#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.677 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.678 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.678 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Feb 1 04:53:04 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [NOTICE] (305441) : haproxy version is 2.8.14-c23fe91 Feb 1 04:53:04 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [NOTICE] (305441) : path to executable is /usr/sbin/haproxy Feb 1 04:53:04 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [WARNING] (305441) : Exiting Master process... Feb 1 04:53:04 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [ALERT] (305441) : Current worker (305443) exited with code 143 (Terminated) Feb 1 04:53:04 localhost neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827[305437]: [WARNING] (305441) : All workers exited. Exiting... 
(0) Feb 1 04:53:04 localhost systemd[1]: libpod-18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd.scope: Deactivated successfully. Feb 1 04:53:04 localhost podman[305755]: 2026-02-01 09:53:04.701739132 +0000 UTC m=+0.073020887 container died 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:53:04 localhost podman[305755]: 2026-02-01 09:53:04.743978778 +0000 UTC m=+0.115260553 container cleanup 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:04 localhost systemd[1]: var-lib-containers-storage-overlay-19e3833c7d8d2b2c1fabb013d6f217a0b7dde45ed475f41dc07e52f74eb93e56-merged.mount: Deactivated successfully. Feb 1 04:53:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd-userdata-shm.mount: Deactivated successfully. Feb 1 04:53:04 localhost systemd[1]: var-lib-containers-storage-overlay-68b1613e4d327631643e09ac0edff96b26245deaec5c43c3419a3ce4c98fd9cd-merged.mount: Deactivated successfully. Feb 1 04:53:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8b5538e4c67fb653899080c7d2e2ae7ee1acaa153b2d1f9c71087a7568aa551-userdata-shm.mount: Deactivated successfully. Feb 1 04:53:04 localhost systemd[1]: run-netns-qdhcp\x2dfdca6946\x2d14e8\x2d4692\x2d9d79\x2d41002e703846.mount: Deactivated successfully. Feb 1 04:53:04 localhost podman[305777]: 2026-02-01 09:53:04.821940897 +0000 UTC m=+0.119525786 container cleanup 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:04 localhost systemd[1]: libpod-conmon-18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd.scope: Deactivated successfully. 
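The podman "container died", "container cleanup" and "container remove" events above track the neutron-haproxy-ovnmeta sidecar being torn down together with its metadata namespace. A small stand-alone check, not part of the deployment tooling, that lists any such sidecars still present; it uses the standard podman CLI and the container-name prefix seen in this log:

    # List leftover neutron-haproxy-ovnmeta-* sidecar containers (if any).
    import subprocess

    def leftover_metadata_proxies(prefix="neutron-haproxy-ovnmeta-"):
        out = subprocess.run(
            ["podman", "ps", "-a", "--filter", f"name={prefix}",
             "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True).stdout
        return [name for name in out.splitlines() if name.strip()]

    if __name__ == "__main__":
        names = leftover_metadata_proxies()
        print("leftover metadata proxies:", names or "none")

An empty result after the teardown recorded above is the expected state.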
Feb 1 04:53:04 localhost podman[305792]: 2026-02-01 09:53:04.89258686 +0000 UTC m=+0.125393259 container remove 18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.897 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[144cbff2-6f36-4e85-9b83-55f704daa563]: (4, ('Sun Feb 1 09:53:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 (18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd)\n18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd\nSun Feb 1 09:53:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 (18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd)\n18dc1643a3318e3bd3500e8b99d71786f9c3ff5685f82f7ae225b36ee835fddd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.900 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ab76b243-6c81-4d15-8c53-0008b5396b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.901 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf10af3d7-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.903 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost kernel: device tapf10af3d7-b0 left promiscuous mode Feb 1 04:53:04 localhost nova_compute[274317]: 2026-02-01 09:53:04.915 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.918 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[58bdf574-9076-4ae7-bb75-60cf1b6c87f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.936 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[078ed8d1-b5a2-4bb7-abfd-e04a16411659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.937 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[157c70f0-7414-4069-bcab-f2af33925df2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.958 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[34f3ad12-693d-4802-ae76-5bb6436da710]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': 
[['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166015, 'reachable_time': 16855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 
0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305814, 'error': None, 'target': 'ovnmeta-f10af3d7-b861-4585-95de-68162ae73827', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.961 158836 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f10af3d7-b861-4585-95de-68162ae73827 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.961 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3de44b-6d39-435f-b73f-3baa424b4c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost systemd[1]: run-netns-ovnmeta\x2df10af3d7\x2db861\x2d4585\x2d95de\x2d68162ae73827.mount: Deactivated successfully. Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.963 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 3c861704-c594-42f8-a5b3-a274ec84650f in datapath 9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 unbound from our chassis#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.966 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.967 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c9390e4d-6dce-4812-90fc-a573d72760c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:04.967 158655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 namespace which is not needed anymore#033[00m Feb 1 04:53:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:05 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [NOTICE] (305555) : haproxy version is 2.8.14-c23fe91 Feb 1 04:53:05 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [NOTICE] (305555) : path to executable is /usr/sbin/haproxy Feb 1 04:53:05 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [WARNING] (305555) : Exiting Master process... 
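Above, the metadata agent deletes the ovnmeta-<network-id> namespace once no VIF ports remain in the datapath. A quick verification sketch, assuming only the stock iproute2 CLI, that lists any ovnmeta-* namespaces left behind:

    # Confirm the ovnmeta-* namespaces torn down above are really gone.
    import subprocess

    def ovnmeta_namespaces():
        # `ip netns list` prints one namespace per line, e.g. "ovnmeta-<uuid> (id: 3)".
        out = subprocess.run(["ip", "netns", "list"],
                             check=True, capture_output=True, text=True).stdout
        names = [line.split()[0] for line in out.splitlines() if line.strip()]
        return [n for n in names if n.startswith("ovnmeta-")]

    if __name__ == "__main__":
        leftovers = ovnmeta_namespaces()
        print("leftover ovnmeta namespaces:", leftovers or "none")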
Feb 1 04:53:05 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [ALERT] (305555) : Current worker (305557) exited with code 143 (Terminated) Feb 1 04:53:05 localhost neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8[305550]: [WARNING] (305555) : All workers exited. Exiting... (0) Feb 1 04:53:05 localhost systemd[1]: libpod-20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac.scope: Deactivated successfully. Feb 1 04:53:05 localhost podman[305831]: 2026-02-01 09:53:05.158096646 +0000 UTC m=+0.074753141 container died 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.209 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost podman[305831]: 2026-02-01 09:53:05.230331528 +0000 UTC m=+0.146988013 container cleanup 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:53:05 localhost podman[305843]: 2026-02-01 09:53:05.259978782 +0000 UTC m=+0.095241830 container cleanup 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:53:05 localhost systemd[1]: libpod-conmon-20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac.scope: Deactivated successfully. 
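The DelPortCommand(..., if_exists=True) transactions in the surrounding records are ovsdbapp's programmatic form of an idempotent OVS port removal. A rough CLI equivalent, illustrative rather than the agent's own code path, using ovs-vsctl with the tap port name from this log:

    # Idempotent OVS port removal, mirroring DelPortCommand(..., if_exists=True).
    import subprocess

    def del_port(port, bridge=None):
        # --if-exists makes the command succeed even if the port is already gone.
        cmd = ["ovs-vsctl", "--if-exists", "del-port"]
        if bridge:
            cmd.append(bridge)
        cmd.append(port)
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Port name taken from the transactions in this log.
        del_port("tap9acb9cb3-f0")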
Feb 1 04:53:05 localhost podman[305858]: 2026-02-01 09:53:05.323038007 +0000 UTC m=+0.075476593 container remove 20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.327 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[22a65303-60ee-42e1-bd7f-3b23abced818]: (4, ('Sun Feb 1 09:53:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 (20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac)\n20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac\nSun Feb 1 09:53:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 (20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac)\n20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.329 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c18e24-98bc-4b0d-8954-f0df45b92103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.331 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9acb9cb3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.333 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost kernel: device tap9acb9cb3-f0 left promiscuous mode Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.346 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.353 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[01d31c1c-c913-4498-ab70-ee92f524e399]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.371 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ca80c0b1-0045-4b05-8523-1849d1c528f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.373 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e5992d-3c9d-460d-a759-e29afe00c6ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.391 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c28f1971-825a-4bf0-9036-198a5099474d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 
'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1166101, 'reachable_time': 40281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 
'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305882, 'error': None, 'target': 'ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.393 158836 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 1 04:53:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:05.393 158836 DEBUG oslo.privsep.daemon [-] privsep: reply[868bf49f-7f2d-44ba-9dfd-4311c0b04256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e103 e103: 6 total, 6 up, 6 in Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.517 274321 DEBUG nova.network.neutron [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Activated binding for port 3c861704-c594-42f8-a5b3-a274ec84650f and host np0005604213.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.518 274321 DEBUG nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Feb 1 04:53:05 localhost 
nova_compute[274317]: 2026-02-01 09:53:05.519 274321 DEBUG nova.virt.libvirt.vif [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-01T09:52:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1216472824',display_name='tempest-LiveMigrationTest-server-1216472824',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005604215.localdomain',hostname='tempest-livemigrationtest-server-1216472824',id=8,image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-01T09:52:56Z,launched_on='np0005604215.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005604215.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d8e4b0fb12f14fbaa248291aa43aacee',ramdisk_id='',reservation_id='r-w7wsdj02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a223c2d3-3df7-4d82-921c-31ace200d43c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-266774784',owner_user_name='tempest-LiveMigrationTest-266774784-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-01T09:52:59Z,user_data=None,user_id='0416f10a8d4f4da2a6dc6cbd271a3010',uuid=aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 
2026-02-01 09:53:05.520 274321 DEBUG nova.network.os_vif_util [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Converting VIF {"id": "3c861704-c594-42f8-a5b3-a274ec84650f", "address": "fa:16:3e:c4:5a:4a", "network": {"id": "9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1381927866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d8e4b0fb12f14fbaa248291aa43aacee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c861704-c5", "ovs_interfaceid": "3c861704-c594-42f8-a5b3-a274ec84650f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.521 274321 DEBUG nova.network.os_vif_util [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.522 274321 DEBUG os_vif [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.525 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.525 274321 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c861704-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.527 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.529 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.533 274321 INFO os_vif [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5a:4a,bridge_name='br-int',has_traffic_filtering=True,id=3c861704-c594-42f8-a5b3-a274ec84650f,network=Network(9acb9cb3-fbe8-4ec2-bc71-dc5c4af33bf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3c861704-c5')#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.533 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.534 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.534 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.535 274321 DEBUG nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.535 274321 INFO nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Deleting instance files /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_del#033[00m Feb 1 04:53:05 localhost nova_compute[274317]: 2026-02-01 09:53:05.536 274321 INFO nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Deletion of /var/lib/nova/instances/aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469_del complete#033[00m Feb 1 04:53:05 localhost systemd[1]: var-lib-containers-storage-overlay-39afd0bb396a392dfd50d36fe6caf2b1c9a1e9797d65ee8ff3803b1095d1a5f1-merged.mount: Deactivated successfully. 
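The Acquiring/acquired/released lines for the "compute_resources" lock above come from oslo.concurrency's lockutils wrapper around the resource tracker. A minimal illustration of the same pattern, not Nova's ResourceTracker itself: functions decorated with the same lock name are serialized in-process, and with DEBUG logging enabled lockutils reports acquire/release timings much like the journal entries above.

    # Serialize access with the same oslo.concurrency primitive used by the
    # resource tracker; the function body is a hypothetical placeholder.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("compute_resources")
    def free_allocations(instance_uuid):
        # Whatever runs here holds "compute_resources" exclusively within
        # this process, like free_pci_device_allocations_for_instance above.
        return f"freed allocations for {instance_uuid}"

    if __name__ == "__main__":
        print(free_allocations("aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469"))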
Feb 1 04:53:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20b10dafd04dcdac1f900a4509924987511c6ee1e0225d7a81833bc4be8c96ac-userdata-shm.mount: Deactivated successfully. Feb 1 04:53:05 localhost systemd[1]: run-netns-ovnmeta\x2d9acb9cb3\x2dfbe8\x2d4ec2\x2dbc71\x2ddc5c4af33bf8.mount: Deactivated successfully. Feb 1 04:53:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 273 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 43 KiB/s wr, 129 op/s Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.181 274321 DEBUG nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.182 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.182 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.182 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.183 274321 DEBUG nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.183 274321 DEBUG nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-unplugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.183 274321 DEBUG nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.183 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.184 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.184 274321 DEBUG oslo_concurrency.lockutils [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.184 274321 DEBUG nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.185 274321 WARNING nova.compute.manager [req-f416e53e-abb5-42e1-b255-6e1114a6396e req-bd7fedae-6685-4b41-a9d8-b2bfe1575530 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received unexpected event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with vm_state active and task_state migrating.#033[00m Feb 1 04:53:06 localhost nova_compute[274317]: 2026-02-01 09:53:06.335 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:06 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e104 e104: 6 total, 6 up, 6 in Feb 1 04:53:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 375 MiB data, 1.1 GiB used, 41 GiB / 42 
GiB avail; 11 MiB/s rd, 12 MiB/s wr, 403 op/s Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.221 274321 DEBUG nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.221 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.222 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.223 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.223 274321 DEBUG nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.223 274321 WARNING nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received unexpected event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with vm_state active and task_state migrating.#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.224 274321 DEBUG nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 1 04:53:08 localhost 
nova_compute[274317]: 2026-02-01 09:53:08.224 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.225 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.225 274321 DEBUG oslo_concurrency.lockutils [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.225 274321 DEBUG nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] No waiting events found dispatching network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.225 274321 WARNING nova.compute.manager [req-4c59c29d-830c-40ee-b3c6-7698c99c415d req-e10655f7-fe7e-4d0d-ada1-5bbbda1749a5 366b10c1124b4cc182e6512cf437f582 8c7e182e9edd4a9496010d2b1c99e9ab - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Received unexpected event network-vif-plugged-3c861704-c594-42f8-a5b3-a274ec84650f for instance with vm_state active and task_state migrating.#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.905 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Acquiring lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.906 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.906 274321 DEBUG oslo_concurrency.lockutils [None 
req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.929 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.929 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.930 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.930 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:53:08 localhost nova_compute[274317]: 2026-02-01 09:53:08.931 274321 DEBUG oslo_concurrency.processutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:09 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/452746631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.375 274321 DEBUG oslo_concurrency.processutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.585 274321 WARNING nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.586 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11703MB free_disk=41.567874908447266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.587 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Acquiring lock 
"compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.587 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.626 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Migration for instance aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.648 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.672 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Migration ea09c78d-8a1e-497d-978c-c737a6e34821 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.726 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Instance 4239e79f-2907-476f-baff-d30c06ed6f5f has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.727 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.727 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.760 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquiring lock "4239e79f-2907-476f-baff-d30c06ed6f5f" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.760 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.761 274321 INFO nova.compute.manager [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Unshelving#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.806 274321 DEBUG oslo_concurrency.processutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:09 localhost nova_compute[274317]: 2026-02-01 09:53:09.849 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 177 active+clean; 375 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 206 op/s Feb 1 04:53:10 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:10 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/681495947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.246 274321 DEBUG oslo_concurrency.processutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.252 274321 DEBUG nova.compute.provider_tree [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.268 274321 DEBUG nova.scheduler.client.report [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.293 274321 DEBUG nova.compute.resource_tracker [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.293 274321 DEBUG oslo_concurrency.lockutils [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.298 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.301 274321 INFO nova.compute.manager [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: 
aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Migrating instance to np0005604213.localdomain finished successfully.#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.303 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.323 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.337 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.338 274321 INFO nova.compute.claims [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Claim successful on node np0005604215.localdomain#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.412 274321 INFO nova.scheduler.client.report [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] Deleted allocation for migration ea09c78d-8a1e-497d-978c-c737a6e34821#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.412 274321 DEBUG nova.virt.libvirt.driver [None req-73b7e1e8-a1ec-429c-ad9f-34aa9f07a43a 7818b8c14c694d9c97606ff05af9b8e2 ef9394e0b21548a491d64bf76f5f6368 - - default default] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.450 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.567 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:10 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/77892895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.914 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.921 274321 DEBUG nova.compute.provider_tree [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.947 274321 DEBUG nova.scheduler.client.report [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:53:10 localhost nova_compute[274317]: 2026-02-01 09:53:10.981 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.023 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquiring lock "refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.024 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquired lock "refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.024 274321 DEBUG nova.network.neutron [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.111 274321 DEBUG nova.network.neutron [None 
req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.337 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:11 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e105 e105: 6 total, 6 up, 6 in Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.563 274321 DEBUG nova.network.neutron [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.580 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Releasing lock "refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.582 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.582 274321 INFO nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Creating image(s)#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.642 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.648 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.715 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.753 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e 
c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.758 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquiring lock "07cd30132c7ce8edc7b720bc0da60a930c4de600" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.759 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "07cd30132c7ce8edc7b720bc0da60a930c4de600" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:11 localhost nova_compute[274317]: 2026-02-01 09:53:11.943 274321 DEBUG nova.virt.libvirt.imagebackend [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Image locations are: [{'url': 'rbd://33fac0b9-80c7-560f-918a-c92d3021ca1e/images/5de7fa57-3d53-423f-a108-b9d18fedfc3f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33fac0b9-80c7-560f-918a-c92d3021ca1e/images/5de7fa57-3d53-423f-a108-b9d18fedfc3f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 1 04:53:12 localhost nova_compute[274317]: 2026-02-01 09:53:12.022 274321 DEBUG nova.virt.libvirt.imagebackend [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Selected location: {'url': 'rbd://33fac0b9-80c7-560f-918a-c92d3021ca1e/images/5de7fa57-3d53-423f-a108-b9d18fedfc3f/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Feb 1 04:53:12 localhost nova_compute[274317]: 2026-02-01 09:53:12.023 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] cloning images/5de7fa57-3d53-423f-a108-b9d18fedfc3f@snap to None/4239e79f-2907-476f-baff-d30c06ed6f5f_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Feb 1 04:53:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 375 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.1 MiB/s rd, 11 MiB/s wr, 207 op/s Feb 1 04:53:12 localhost nova_compute[274317]: 2026-02-01 09:53:12.199 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "07cd30132c7ce8edc7b720bc0da60a930c4de600" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:12 localhost nova_compute[274317]: 2026-02-01 09:53:12.352 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - 
default default] Lazy-loading 'migration_context' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:12 localhost nova_compute[274317]: 2026-02-01 09:53:12.451 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] flattening vms/4239e79f-2907-476f-baff-d30c06ed6f5f_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.321 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Image rbd:vms/4239e79f-2907-476f-baff-d30c06ed6f5f_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.322 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.322 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Ensure instance console log exists: /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.322 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.323 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.323 274321 DEBUG oslo_concurrency.lockutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.324 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 
4239e79f-2907-476f-baff-d30c06ed6f5f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-01T09:52:49Z,direct_url=,disk_format='raw',id=5de7fa57-3d53-423f-a108-b9d18fedfc3f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1815488958-shelved',owner='049ec09f02c049edbfda9ad51af738d7',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-01T09:53:07Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encrypted': False, 'image_id': 'a223c2d3-3df7-4d82-921c-31ace200d43c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.328 274321 WARNING nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.329 274321 DEBUG nova.virt.libvirt.host [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Searching host: 'np0005604215.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.330 274321 DEBUG nova.virt.libvirt.host [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.331 274321 DEBUG nova.virt.libvirt.host [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Searching host: 'np0005604215.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.332 274321 DEBUG nova.virt.libvirt.host [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.332 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.332 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-01T09:50:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='04b6d75f-0335-413a-b9d6-dfe49d77feaf',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-01T09:52:49Z,direct_url=,disk_format='raw',id=5de7fa57-3d53-423f-a108-b9d18fedfc3f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1815488958-shelved',owner='049ec09f02c049edbfda9ad51af738d7',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-01T09:53:07Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.333 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.333 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.333 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.333 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.334 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 
09:53:13.334 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.334 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.334 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.334 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.335 274321 DEBUG nova.virt.hardware [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.335 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.384 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:53:13 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:53:13 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3696398923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.807 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.844 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:13 localhost nova_compute[274317]: 2026-02-01 09:53:13.849 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:13 localhost podman[306183]: 2026-02-01 09:53:13.874264367 +0000 UTC m=+0.079851831 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:13 localhost systemd[1]: tmp-crun.S3iEuz.mount: Deactivated successfully. 
Feb 1 04:53:13 localhost podman[306185]: 2026-02-01 09:53:13.948130919 +0000 UTC m=+0.152650929 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:53:13 localhost podman[306185]: 2026-02-01 09:53:13.962653571 +0000 UTC m=+0.167173571 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:53:13 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:53:14 localhost podman[306183]: 2026-02-01 09:53:14.01298666 +0000 UTC m=+0.218574104 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:53:14 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:53:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 9.0 MiB/s wr, 358 op/s Feb 1 04:53:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:53:14 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4276625103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.273 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.276 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.316 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] End _get_guest_xml xml= [libvirt guest XML for instance-00000006 (uuid 4239e79f-2907-476f-baff-d30c06ed6f5f): memory 131072 KiB, 1 vCPU, hvm, display name tempest-UnshelveToHostMultiNodesTest-server-1815488958, project tempest-UnshelveToHostMultiNodesTest-51338059, created 2026-02-01 09:53:13, sysinfo RDO OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9, rng backend /dev/urandom; the XML markup itself was stripped in this capture, leaving one mostly empty nova_compute continuation line per element]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.363 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.364 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.365 274321 INFO nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Using config drive#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.398 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.447 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.474 274321 DEBUG nova.objects.instance [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lazy-loading 'keypairs' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.531 274321 INFO nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Creating config drive at /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.536 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpupdz8vr7 execute
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.665 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpupdz8vr7" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.708 274321 DEBUG nova.storage.rbd_utils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] rbd image 4239e79f-2907-476f-baff-d30c06ed6f5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.714 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config 4239e79f-2907-476f-baff-d30c06ed6f5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.929 274321 DEBUG oslo_concurrency.processutils [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config 4239e79f-2907-476f-baff-d30c06ed6f5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:14 localhost nova_compute[274317]: 2026-02-01 09:53:14.931 274321 INFO nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Deleting local config drive /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f/disk.config because it was imported into RBD.#033[00m Feb 1 04:53:14 localhost systemd-machined[202466]: New machine qemu-3-instance-00000006. Feb 1 04:53:15 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000006. 
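The sequence above (mkisofs at 09:53:14.536, rbd import at 09:53:14.714, deletion of the local file at 09:53:14.931) is how the config drive for instance 4239e79f-2907-476f-baff-d30c06ed6f5f ends up in the Ceph "vms" pool. A condensed sketch of that sequence, not Nova's implementation, with metadata_dir standing in for the /tmp/tmpupdz8vr7 staging directory and the -publisher flag from the log omitted for brevity:

import os
import subprocess

def build_and_import_config_drive(instance_uuid, metadata_dir,
                                  pool="vms", client_id="openstack",
                                  conf="/etc/ceph/ceph.conf"):
    local_iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    rbd_name = f"{instance_uuid}_disk.config"
    # Build an ISO9660 image labelled "config-2" from the staged metadata.
    subprocess.run(["/usr/bin/mkisofs", "-o", local_iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l",
                    "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
                   check=True)
    # Import the ISO into RBD under the name shown in the log.
    subprocess.run(["rbd", "import", "--pool", pool, local_iso, rbd_name,
                    "--image-format=2", "--id", client_id, "--conf", conf],
                   check=True)
    # Remove the local copy once it lives in RBD, as logged at 09:53:14.931.
    os.unlink(local_iso)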
Feb 1 04:53:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.343 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.344 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] VM Resumed (Lifecycle Event)#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.347 274321 DEBUG nova.compute.manager [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.348 274321 DEBUG nova.virt.libvirt.driver [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.352 274321 INFO nova.virt.libvirt.driver [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance spawned successfully.#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.370 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.374 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.398 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.398 274321 DEBUG nova.virt.driver [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.399 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] VM Started (Lifecycle Event)#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.421 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.425 274321 DEBUG nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.449 274321 INFO nova.compute.manager [None req-c47a18c5-3008-48c5-bac9-714d6e200798 - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 1 04:53:15 localhost nova_compute[274317]: 2026-02-01 09:53:15.594 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:16 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:16.056 2 INFO neutron.agent.securitygroups_rpc [None req-fe72f4fd-5cc1-4afa-94a0-35085a503c7b 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Security group member updated ['3c3daae5-f0f3-42a8-b893-8c534dcb0055']#033[00m Feb 1 04:53:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 7.5 MiB/s wr, 297 op/s Feb 1 04:53:16 localhost nova_compute[274317]: 2026-02-01 09:53:16.342 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e106 e106: 6 total, 6 up, 6 in Feb 1 04:53:17 localhost nova_compute[274317]: 2026-02-01 09:53:17.117 274321 DEBUG nova.compute.manager [None req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:53:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:17.170 2 INFO neutron.agent.securitygroups_rpc [None req-847588ff-1f30-46d7-9f2d-cc2e866fd5e9 0416f10a8d4f4da2a6dc6cbd271a3010 d8e4b0fb12f14fbaa248291aa43aacee - - default default] Security group member updated ['3c3daae5-f0f3-42a8-b893-8c534dcb0055']#033[00m Feb 1 04:53:17 localhost nova_compute[274317]: 2026-02-01 09:53:17.191 274321 DEBUG oslo_concurrency.lockutils [None 
req-18be2f1f-a66a-4029-b12b-e9a556c8f79e ade63c676767402eb16f3f5df77b141e c2c1f738d9b04a26b94dfcbe1966af64 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 7.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 177 active+clean; 226 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 9.3 MiB/s rd, 6.1 MiB/s wr, 404 op/s Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.184 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Acquiring lock "4239e79f-2907-476f-baff-d30c06ed6f5f" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.185 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.185 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Acquiring lock "4239e79f-2907-476f-baff-d30c06ed6f5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.186 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.186 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.189 274321 INFO nova.compute.manager [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Terminating instance#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.190 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Acquiring lock 
"refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.190 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Acquired lock "refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.191 274321 DEBUG nova.network.neutron [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.396 274321 DEBUG nova.network.neutron [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.653 274321 DEBUG nova.network.neutron [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.723 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Releasing lock "refresh_cache-4239e79f-2907-476f-baff-d30c06ed6f5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.724 274321 DEBUG nova.compute.manager [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 1 04:53:18 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully. Feb 1 04:53:18 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 3.903s CPU time. Feb 1 04:53:18 localhost systemd-machined[202466]: Machine qemu-3-instance-00000006 terminated. 
Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.952 274321 INFO nova.virt.libvirt.driver [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance destroyed successfully.#033[00m Feb 1 04:53:18 localhost nova_compute[274317]: 2026-02-01 09:53:18.953 274321 DEBUG nova.objects.instance [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lazy-loading 'resources' on Instance uuid 4239e79f-2907-476f-baff-d30c06ed6f5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.666 274321 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.666 274321 INFO nova.compute.manager [-] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] VM Stopped (Lifecycle Event)#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.685 274321 DEBUG nova.compute.manager [None req-1fe7d5a0-e11e-4a4f-92c2-5e487da5699f - - - - - -] [instance: aa6bae02-2ed3-49c7-9c3d-2e8d69c1b469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.729 274321 INFO nova.virt.libvirt.driver [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Deleting instance files /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f_del#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.730 274321 INFO nova.virt.libvirt.driver [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Deletion of /var/lib/nova/instances/4239e79f-2907-476f-baff-d30c06ed6f5f_del complete#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.775 274321 INFO nova.compute.manager [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.776 274321 DEBUG oslo.service.loopingcall [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.776 274321 DEBUG nova.compute.manager [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.777 274321 DEBUG nova.network.neutron [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 1 04:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.867 274321 DEBUG nova.network.neutron [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 1 04:53:19 localhost podman[306410]: 2026-02-01 09:53:19.874190401 +0000 UTC m=+0.082379629 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.7, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z) Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.890 274321 DEBUG nova.network.neutron [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 1 04:53:19 localhost podman[306410]: 2026-02-01 09:53:19.891636665 +0000 UTC m=+0.099825883 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1769056855, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc.) Feb 1 04:53:19 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.911 274321 INFO nova.compute.manager [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Took 0.13 seconds to deallocate network for instance.#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.953 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:19 localhost nova_compute[274317]: 2026-02-01 09:53:19.954 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:19 localhost podman[306411]: 2026-02-01 09:53:19.997433963 +0000 UTC m=+0.200921294 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent) Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.016 274321 DEBUG oslo_concurrency.processutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:20 localhost podman[306411]: 2026-02-01 09:53:20.032706302 +0000 UTC m=+0.236193703 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:20 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
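The oslo_concurrency.lockutils lines in this window report how long each caller waited to acquire a named lock and how long it held it (the instance lock held 7.431s by do_unshelve_instance and 2.529s by do_terminate_instance, and the "compute_resources" lock just below). A stdlib-only illustration of how such waited/held figures can be produced around a lock; this is not oslo.concurrency's implementation:

import contextlib
import threading
import time

_locks = {}

@contextlib.contextmanager
def timed_lock(name):
    # One process-wide lock per name (simplified; lock creation is not guarded).
    lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    lock.acquire()
    t1 = time.monotonic()
    print(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
    try:
        yield
    finally:
        lock.release()
        print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

with timed_lock("compute_resources"):
    time.sleep(0.1)  # stand-in for resource tracker work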
Feb 1 04:53:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 226 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 8.5 MiB/s rd, 5.6 MiB/s wr, 371 op/s Feb 1 04:53:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:20 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1092996455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.506 274321 DEBUG oslo_concurrency.processutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.513 274321 DEBUG nova.compute.provider_tree [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.532 274321 DEBUG nova.scheduler.client.report [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.555 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.581 274321 INFO nova.scheduler.client.report [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 049ec09f02c049edbfda9ad51af738d7 - - default default] Deleted allocations for instance 4239e79f-2907-476f-baff-d30c06ed6f5f#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.640 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:20 localhost nova_compute[274317]: 2026-02-01 09:53:20.713 274321 DEBUG oslo_concurrency.lockutils [None req-9d8f1100-e1dd-4ab5-9e1c-f13fdf5e3656 2d1e212774fc48c5970abb8787ca767f 
049ec09f02c049edbfda9ad51af738d7 - - default default] Lock "4239e79f-2907-476f-baff-d30c06ed6f5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:53:21 Feb 1 04:53:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:53:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:53:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['manila_metadata', 'manila_data', '.mgr', 'backups', 'vms', 'images', 'volumes'] Feb 1 04:53:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:53:21 localhost nova_compute[274317]: 2026-02-01 09:53:21.344 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e107 e107: 6 total, 6 up, 6 in Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006578574295086544 of space, bias 1.0, pg target 1.315714859017309 quantized to 32 (current 32) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:53:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] 
Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.0021628687418574354 quantized to 16 (current 16) Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:53:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:53:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:53:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 226 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 218 op/s Feb 1 04:53:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 259 op/s Feb 1 04:53:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:25 localhost nova_compute[274317]: 2026-02-01 09:53:25.669 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 4.8 MiB/s wr, 213 op/s Feb 1 04:53:26 localhost nova_compute[274317]: 2026-02-01 09:53:26.347 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:53:26 localhost systemd[1]: tmp-crun.okSPaM.mount: Deactivated successfully. 
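The pg_autoscaler lines above can be sanity-checked arithmetically. For the 'vms' pool, the logged capacity ratio times the bias and a PG budget of about 200 reproduces the logged pg target; that budget is an inference (plausibly 6 OSDs * ~100 target PGs per OSD / 3 replicas), not something the log states, and other pools in the same dump imply a slightly different effective budget, so this is a consistency check rather than the autoscaler's exact formula:

# Numbers copied from the pg_autoscaler line for pool 'vms' above.
capacity_ratio = 0.006578574295086544   # "using ... of space"
bias = 1.0
pg_budget = 200                          # inferred, not logged
pg_target = capacity_ratio * bias * pg_budget
print(pg_target)  # ~1.3157148590173, agreeing with "pg target 1.315714859017309"
# The log then shows this quantized to 32, which is also the pool's current pg_num.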
Feb 1 04:53:26 localhost podman[306470]: 2026-02-01 09:53:26.877609604 +0000 UTC m=+0.089185361 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:53:26 localhost podman[306470]: 2026-02-01 09:53:26.916711973 +0000 UTC m=+0.128287680 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:53:26 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:53:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:27.817 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:27.818 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:53:27 localhost nova_compute[274317]: 2026-02-01 09:53:27.819 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s Feb 1 04:53:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:28.821 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:53:30 localhost podman[236852]: time="2026-02-01T09:53:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:53:30 localhost podman[236852]: @ - - [01/Feb/2026:09:53:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:53:30 localhost podman[236852]: @ - - [01/Feb/2026:09:53:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18295 "" "Go-http-client/1.1" Feb 1 04:53:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s Feb 1 04:53:30 localhost nova_compute[274317]: 2026-02-01 09:53:30.702 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:31 localhost nova_compute[274317]: 2026-02-01 09:53:31.350 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e108 e108: 6 total, 6 up, 6 in Feb 1 04:53:31 localhost openstack_network_exporter[239388]: ERROR 09:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:53:31 localhost openstack_network_exporter[239388]: Feb 1 04:53:31 localhost openstack_network_exporter[239388]: ERROR 09:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:53:31 localhost openstack_network_exporter[239388]: Feb 1 04:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:53:31 localhost podman[306489]: 2026-02-01 09:53:31.666325683 +0000 UTC m=+0.078585690 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:53:31 localhost podman[306489]: 2026-02-01 09:53:31.67842243 +0000 UTC m=+0.090682447 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:53:31 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:53:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s Feb 1 04:53:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:53:33 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:53:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:53:33 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:53:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:53:33 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 7815552d-7e18-42d6-bd21-6acc14c1a76e (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:53:33 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 7815552d-7e18-42d6-bd21-6acc14c1a76e (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:53:33 localhost ceph-mgr[278126]: [progress INFO root] Completed event 7815552d-7e18-42d6-bd21-6acc14c1a76e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:53:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:53:33 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:53:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:53:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:53:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e109 e109: 6 total, 6 up, 6 in Feb 1 04:53:33 localhost nova_compute[274317]: 2026-02-01 09:53:33.950 274321 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 1 04:53:33 localhost nova_compute[274317]: 2026-02-01 09:53:33.951 274321 INFO nova.compute.manager [-] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] VM Stopped (Lifecycle Event)#033[00m Feb 1 04:53:33 localhost nova_compute[274317]: 2026-02-01 09:53:33.976 274321 DEBUG nova.compute.manager [None req-7e9157d0-d47c-4b39-aa9d-529866017e1a - - - - - -] [instance: 4239e79f-2907-476f-baff-d30c06ed6f5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 1 04:53:34 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:34.098 2 INFO neutron.agent.securitygroups_rpc [req-75bc0aa1-37ef-492b-a96a-ae9080ee75e0 req-edda2748-5bcc-42c3-8a62-8fe3b52553b6 dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['f0c61cda-1998-487f-b5b2-ae9c4848f56a']#033[00m Feb 1 04:53:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 
active+clean; 145 MiB data, 746 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.1 KiB/s wr, 18 op/s Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost snmpd[67757]: empty variable list in _query Feb 1 04:53:34 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:34.836 2 INFO neutron.agent.securitygroups_rpc [req-82289f6f-42d9-438b-b261-05589eed2efe req-98f019aa-b49d-4aef-94dc-ee96ed3719e9 dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['ada4c3f2-cdfe-4dd3-85f7-4e743664f11d']#033[00m Feb 1 04:53:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:35 localhost nova_compute[274317]: 2026-02-01 09:53:35.733 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:35.788 2 INFO neutron.agent.securitygroups_rpc [req-91571c20-84f3-4df1-9546-4b115d3d0f93 req-0f489c88-5b53-4bd7-842f-0fffa7ebc222 dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['879f68ae-8832-4697-b764-9db0f8c3108c']#033[00m Feb 1 04:53:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 746 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.1 KiB/s wr, 18 op/s Feb 1 04:53:36 localhost nova_compute[274317]: 2026-02-01 09:53:36.352 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:36 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:53:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:53:36 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:36.623 2 INFO neutron.agent.securitygroups_rpc [req-876e83f6-6b17-43eb-b040-065589623e5f req-6a5cb928-de96-4098-a70d-23e5abe4d6ce dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['98cb19e2-acc2-4297-8b83-10025f09d04b']#033[00m Feb 1 04:53:37 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:53:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:37.598 2 INFO neutron.agent.securitygroups_rpc [req-92fefcaa-7c6d-4e1a-a20a-67097b188e7e req-0e7bfb69-2aec-4c7e-b452-953e89b3814f dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['374381c7-702b-4257-92ff-7af171862681']#033[00m Feb 1 04:53:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:37.768 2 INFO neutron.agent.securitygroups_rpc [req-2c86665d-931d-4ec0-bcba-f7d3083dc82f req-7b6541b5-97df-453e-9e47-7bd01fb85ab0 dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['374381c7-702b-4257-92ff-7af171862681']#033[00m Feb 1 04:53:38 localhost neutron_sriov_agent[252054]: 
2026-02-01 09:53:38.109 2 INFO neutron.agent.securitygroups_rpc [req-eb564804-d725-4326-adb0-58732ff445a4 req-615b10cf-445f-485b-a611-28146c91f72c dca8da9c475e44f19383733eded7ebf5 e44a50a3d96541748629cacff5ef78b0 - - default default] Security group rule updated ['374381c7-702b-4257-92ff-7af171862681']#033[00m Feb 1 04:53:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Feb 1 04:53:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 45 op/s Feb 1 04:53:40 localhost nova_compute[274317]: 2026-02-01 09:53:40.735 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:41 localhost nova_compute[274317]: 2026-02-01 09:53:41.354 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e110 e110: 6 total, 6 up, 6 in Feb 1 04:53:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:41.771 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:41.772 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:41.772 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s Feb 1 04:53:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s Feb 1 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:53:44 localhost podman[306596]: 2026-02-01 09:53:44.87314658 +0000 UTC m=+0.084199276 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:53:44 localhost podman[306597]: 2026-02-01 09:53:44.915698477 +0000 UTC m=+0.127064382 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:53:44 localhost podman[306597]: 2026-02-01 09:53:44.928457394 +0000 UTC m=+0.139823309 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:53:44 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:53:44 localhost podman[306596]: 2026-02-01 09:53:44.943737651 +0000 UTC m=+0.154790397 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller) Feb 1 04:53:44 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:53:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:45 localhost nova_compute[274317]: 2026-02-01 09:53:45.737 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:46 localhost nova_compute[274317]: 2026-02-01 09:53:46.102 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:46 localhost nova_compute[274317]: 2026-02-01 09:53:46.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:53:46 localhost nova_compute[274317]: 2026-02-01 09:53:46.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:53:46 localhost nova_compute[274317]: 2026-02-01 09:53:46.119 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:53:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s Feb 1 04:53:46 localhost nova_compute[274317]: 2026-02-01 09:53:46.356 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:53:49 localhost nova_compute[274317]: 2026-02-01 09:53:49.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:49 localhost nova_compute[274317]: 2026-02-01 09:53:49.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:53:50 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:53:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:50.367 259225 INFO neutron.agent.linux.ip_lib [None req-d99bc822-9c44-49c0-bb45-91a23f333e23 - - - - - -] Device tapf87036fa-d5 cannot be used as it has no MAC address#033[00m Feb 1 04:53:50 localhost systemd[1]: tmp-crun.Xe1ULa.mount: Deactivated successfully. Feb 1 04:53:50 localhost podman[306644]: 2026-02-01 09:53:50.396781238 +0000 UTC m=+0.094109004 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal) Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.403 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost kernel: device tapf87036fa-d5 entered promiscuous mode Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.413 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost ovn_controller[152787]: 2026-02-01T09:53:50Z|00108|binding|INFO|Claiming lport f87036fa-d537-4b85-b37c-c486487fff03 for this chassis. Feb 1 04:53:50 localhost ovn_controller[152787]: 2026-02-01T09:53:50Z|00109|binding|INFO|f87036fa-d537-4b85-b37c-c486487fff03: Claiming unknown Feb 1 04:53:50 localhost NetworkManager[5972]: [1769939630.4162] manager: (tapf87036fa-d5): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Feb 1 04:53:50 localhost systemd-udevd[306682]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:53:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:50.423 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-64c4abd2-68ab-4da2-b883-4056dccfe81b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c4abd2-68ab-4da2-b883-4056dccfe81b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1713821b0f794e3b830e51e1263a38e8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a20ebc6e-8957-43a9-8b71-59702d481dc9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f87036fa-d537-4b85-b37c-c486487fff03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:50.425 158655 INFO neutron.agent.ovn.metadata.agent [-] Port f87036fa-d537-4b85-b37c-c486487fff03 in datapath 64c4abd2-68ab-4da2-b883-4056dccfe81b bound to our chassis#033[00m Feb 1 04:53:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:50.426 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 64c4abd2-68ab-4da2-b883-4056dccfe81b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:53:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:50.428 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[07484f54-37ff-4631-b351-2d0a841ee2cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on 
tapf87036fa-d5: No such device Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost ovn_controller[152787]: 2026-02-01T09:53:50Z|00110|binding|INFO|Setting lport f87036fa-d537-4b85-b37c-c486487fff03 ovn-installed in OVS Feb 1 04:53:50 localhost ovn_controller[152787]: 2026-02-01T09:53:50Z|00111|binding|INFO|Setting lport f87036fa-d537-4b85-b37c-c486487fff03 up in Southbound Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.457 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost podman[306646]: 2026-02-01 09:53:50.460480063 +0000 UTC m=+0.154130615 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost podman[306644]: 2026-02-01 09:53:50.465428178 +0000 UTC m=+0.162755944 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.7, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible) Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost journal[224955]: ethtool ioctl error on tapf87036fa-d5: No such device Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.499 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost podman[306646]: 2026-02-01 09:53:50.520890086 +0000 UTC m=+0.214540638 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.530 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:53:50 localhost nova_compute[274317]: 2026-02-01 09:53:50.738 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:50.804 2 INFO neutron.agent.securitygroups_rpc [None req-200cf6df-4bba-4fb6-b3b8-7b487bc0871d 3ef0026b934441b28e0635d7a99bc592 d1284af7476748758a037c2a7d34b7a2 - - default default] Security group member updated ['02728618-05ed-4a37-93a2-59fcc09c3239']#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.121 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.121 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.122 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.122 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:51 localhost podman[306778]: Feb 1 04:53:51 localhost podman[306778]: 2026-02-01 09:53:51.331223545 +0000 UTC m=+0.080505690 container create 
2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.359 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:51 localhost podman[306778]: 2026-02-01 09:53:51.287327787 +0000 UTC m=+0.036610002 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:53:51 localhost systemd[1]: Started libpod-conmon-2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b.scope. Feb 1 04:53:51 localhost systemd[1]: Started libcrun container. Feb 1 04:53:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21cdbba4626f33fa36df1df6f6f66a3be4030297ce337d866e792ea89e48cc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:53:51 localhost podman[306778]: 2026-02-01 09:53:51.434827735 +0000 UTC m=+0.184109880 container init 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:51 localhost podman[306778]: 2026-02-01 09:53:51.45616838 +0000 UTC m=+0.205450525 container start 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:53:51 localhost dnsmasq[306796]: started, version 2.85 cachesize 150 Feb 1 04:53:51 localhost dnsmasq[306796]: DNS service limited to local subnets Feb 1 04:53:51 localhost dnsmasq[306796]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:53:51 localhost dnsmasq[306796]: warning: no upstream servers configured Feb 1 04:53:51 localhost dnsmasq-dhcp[306796]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:53:51 localhost dnsmasq[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/addn_hosts - 0 addresses Feb 1 04:53:51 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/host Feb 1 04:53:51 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/opts Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:53:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:53:51 localhost neutron_sriov_agent[252054]: 2026-02-01 09:53:51.499 2 INFO neutron.agent.securitygroups_rpc [None req-07f1805c-f1e5-49eb-9bf2-554d43f01479 3ef0026b934441b28e0635d7a99bc592 d1284af7476748758a037c2a7d34b7a2 - - default default] Security group member updated ['02728618-05ed-4a37-93a2-59fcc09c3239']#033[00m Feb 1 04:53:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/987860528' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.533 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:51.626 259225 INFO neutron.agent.dhcp.agent [None req-c0440d8d-6f1b-4813-bbfd-fb17e4f3bc44 - - - - - -] DHCP configuration for ports {'4042c1e9-20ae-449b-8b73-91339d0f2377'} is completed#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.741 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.742 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11671MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.742 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.743 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.791 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.791 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:53:51 localhost nova_compute[274317]: 2026-02-01 09:53:51.807 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:53:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:53:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:53:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3703738378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:53:52 localhost nova_compute[274317]: 2026-02-01 09:53:52.223 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:53:52 localhost nova_compute[274317]: 2026-02-01 09:53:52.230 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:53:52 localhost nova_compute[274317]: 2026-02-01 09:53:52.251 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:53:52 localhost nova_compute[274317]: 2026-02-01 09:53:52.282 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:53:52 localhost nova_compute[274317]: 2026-02-01 09:53:52.283 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:53:52 localhost systemd[1]: tmp-crun.O9XdB6.mount: Deactivated successfully. 
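[editor's note] The resource-tracker entries above show nova shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` while refreshing its disk inventory. A minimal sketch of replaying that same query and summarising the cluster capacity the pgmap lines also report; the JSON field names are those of recent Ceph releases and may differ on other versions.

```python
# Minimal sketch: re-run the "ceph df" query the nova resource tracker logs
# above and summarise cluster capacity.  Requires the ceph CLI, the
# client.openstack keyring and /etc/ceph/ceph.conf on the host.
import json
import subprocess

def ceph_df(client="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    stats = ceph_df()["stats"]
    total = stats["total_bytes"]
    avail = stats["total_avail_bytes"]
    # older releases expose "total_used_bytes", newer "total_used_raw_bytes"
    used = stats.get("total_used_raw_bytes", stats.get("total_used_bytes", 0))
    gib = 1024 ** 3
    print(f"{avail / gib:.0f} GiB / {total / gib:.0f} GiB avail, "
          f"{used / gib:.2f} GiB used")
```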
Feb 1 04:53:53 localhost nova_compute[274317]: 2026-02-01 09:53:53.281 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:53 localhost nova_compute[274317]: 2026-02-01 09:53:53.302 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:53 localhost nova_compute[274317]: 2026-02-01 09:53:53.303 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:53:53 localhost nova_compute[274317]: 2026-02-01 09:53:53.303 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:53:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e111 e111: 6 total, 6 up, 6 in Feb 1 04:53:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:53:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:54.361 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:53:54Z, description=, device_id=cd47e43f-fc78-414f-aa7f-74876586e763, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a7dd6bf3-338e-4748-ab7d-96e7f31fe4ba, ip_allocation=immediate, mac_address=fa:16:3e:92:32:fc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:53:48Z, description=, dns_domain=, id=64c4abd2-68ab-4da2-b883-4056dccfe81b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-960610517-network, port_security_enabled=True, project_id=1713821b0f794e3b830e51e1263a38e8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34016, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=906, status=ACTIVE, subnets=['d305c769-28bb-47c7-92e1-5f2d5081f6eb'], tags=[], tenant_id=1713821b0f794e3b830e51e1263a38e8, updated_at=2026-02-01T09:53:49Z, vlan_transparent=None, network_id=64c4abd2-68ab-4da2-b883-4056dccfe81b, port_security_enabled=False, project_id=1713821b0f794e3b830e51e1263a38e8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=945, status=DOWN, tags=[], tenant_id=1713821b0f794e3b830e51e1263a38e8, updated_at=2026-02-01T09:53:54Z on network 64c4abd2-68ab-4da2-b883-4056dccfe81b#033[00m Feb 1 04:53:54 localhost dnsmasq[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/addn_hosts - 1 addresses Feb 1 04:53:54 localhost dnsmasq-dhcp[306796]: read 
/var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/host Feb 1 04:53:54 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/opts Feb 1 04:53:54 localhost podman[306837]: 2026-02-01 09:53:54.563480697 +0000 UTC m=+0.059200116 container kill 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:53:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:54.787 259225 INFO neutron.agent.dhcp.agent [None req-a5280b46-cccf-45c4-ad12-83144e7c5d92 - - - - - -] DHCP configuration for ports {'a7dd6bf3-338e-4748-ab7d-96e7f31fe4ba'} is completed#033[00m Feb 1 04:53:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:53:55 localhost nova_compute[274317]: 2026-02-01 09:53:55.740 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:55.902 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:53:54Z, description=, device_id=cd47e43f-fc78-414f-aa7f-74876586e763, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a7dd6bf3-338e-4748-ab7d-96e7f31fe4ba, ip_allocation=immediate, mac_address=fa:16:3e:92:32:fc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:53:48Z, description=, dns_domain=, id=64c4abd2-68ab-4da2-b883-4056dccfe81b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-960610517-network, port_security_enabled=True, project_id=1713821b0f794e3b830e51e1263a38e8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34016, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=906, status=ACTIVE, subnets=['d305c769-28bb-47c7-92e1-5f2d5081f6eb'], tags=[], tenant_id=1713821b0f794e3b830e51e1263a38e8, updated_at=2026-02-01T09:53:49Z, vlan_transparent=None, network_id=64c4abd2-68ab-4da2-b883-4056dccfe81b, port_security_enabled=False, project_id=1713821b0f794e3b830e51e1263a38e8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=945, status=DOWN, tags=[], tenant_id=1713821b0f794e3b830e51e1263a38e8, updated_at=2026-02-01T09:53:54Z on network 64c4abd2-68ab-4da2-b883-4056dccfe81b#033[00m Feb 1 04:53:56 localhost systemd[1]: tmp-crun.yWenqp.mount: Deactivated successfully. 
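[editor's note] Each "Trigger reload_allocations" above ends with the DHCP agent signalling the dnsmasq container (the podman kill event) so it re-reads the per-network files under /var/lib/neutron/dhcp/<network-id>/. A minimal sketch for dumping those files for the network in this log:

```python
# Minimal sketch: print the dnsmasq configuration files the DHCP agent
# regenerates under /var/lib/neutron/dhcp/<network-id>/ -- the addn_hosts,
# host and opts files the entries above show dnsmasq re-reading.
# The network UUID is the one from the log; substitute your own.
from pathlib import Path

NETWORK_ID = "64c4abd2-68ab-4da2-b883-4056dccfe81b"
DHCP_DIR = Path("/var/lib/neutron/dhcp") / NETWORK_ID

for name in ("addn_hosts", "host", "opts"):
    path = DHCP_DIR / name
    print(f"--- {path} ---")
    if path.exists():
        print(path.read_text() or "(empty)")
    else:
        print("(missing)")
```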
Feb 1 04:53:56 localhost dnsmasq[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/addn_hosts - 1 addresses Feb 1 04:53:56 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/host Feb 1 04:53:56 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/opts Feb 1 04:53:56 localhost podman[306876]: 2026-02-01 09:53:56.145089978 +0000 UTC m=+0.075113053 container kill 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:53:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:53:56 localhost nova_compute[274317]: 2026-02-01 09:53:56.362 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:56 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:53:56.458 259225 INFO neutron.agent.dhcp.agent [None req-5315bbd6-1862-429e-951c-d2c0ebb5080f - - - - - -] DHCP configuration for ports {'a7dd6bf3-338e-4748-ab7d-96e7f31fe4ba'} is completed#033[00m Feb 1 04:53:57 localhost ovn_controller[152787]: 2026-02-01T09:53:57Z|00112|ovn_bfd|INFO|Enabled BFD on interface ovn-2186fb-0 Feb 1 04:53:57 localhost ovn_controller[152787]: 2026-02-01T09:53:57Z|00113|ovn_bfd|INFO|Enabled BFD on interface ovn-e1cc33-0 Feb 1 04:53:57 localhost ovn_controller[152787]: 2026-02-01T09:53:57Z|00114|ovn_bfd|INFO|Enabled BFD on interface ovn-45aa31-0 Feb 1 04:53:57 localhost nova_compute[274317]: 2026-02-01 09:53:57.313 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:57 localhost nova_compute[274317]: 2026-02-01 09:53:57.317 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:57 localhost nova_compute[274317]: 2026-02-01 09:53:57.337 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:57 localhost nova_compute[274317]: 2026-02-01 09:53:57.343 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:57 localhost nova_compute[274317]: 2026-02-01 09:53:57.377 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:53:57 localhost systemd[1]: tmp-crun.qbi4ky.mount: Deactivated successfully. 
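[editor's note] ovn_controller above reports enabling BFD on the tunnel interfaces ovn-2186fb-0, ovn-e1cc33-0 and ovn-45aa31-0. A hedged sketch, assuming the usual Open vSwitch schema where the Interface table carries a bfd_status column, for reading the resulting session state with ovs-vsctl:

```python
# Minimal sketch: read the BFD session state for the tunnel interfaces named
# in the ovn_controller messages above from the local Open vSwitch database.
# Assumes ovs-vsctl access and that the Interface table exposes bfd_status
# (state, forwarding, remote_state, ...); adjust names for your deployment.
import subprocess

IFACES = ["ovn-2186fb-0", "ovn-e1cc33-0", "ovn-45aa31-0"]

for iface in IFACES:
    result = subprocess.run(
        ["ovs-vsctl", "get", "Interface", iface, "bfd_status"],
        capture_output=True, text=True,
    )
    status = result.stdout.strip() if result.returncode == 0 else result.stderr.strip()
    print(f"{iface}: {status}")
```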
Feb 1 04:53:57 localhost podman[306899]: 2026-02-01 09:53:57.892520757 +0000 UTC m=+0.103296531 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute) Feb 1 04:53:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e112 e112: 6 total, 6 up, 6 in Feb 1 04:53:57 localhost podman[306899]: 2026-02-01 09:53:57.904636344 +0000 UTC m=+0.115412088 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:53:57 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:53:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.286 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:58 localhost dnsmasq[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/addn_hosts - 0 addresses Feb 1 04:53:58 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/host Feb 1 04:53:58 localhost podman[306934]: 2026-02-01 09:53:58.503346577 +0000 UTC m=+0.062337554 container kill 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:53:58 localhost dnsmasq-dhcp[306796]: read /var/lib/neutron/dhcp/64c4abd2-68ab-4da2-b883-4056dccfe81b/opts Feb 1 04:53:58 localhost ovn_controller[152787]: 2026-02-01T09:53:58Z|00115|ovn_bfd|INFO|Disabled BFD on interface ovn-2186fb-0 Feb 1 04:53:58 localhost ovn_controller[152787]: 2026-02-01T09:53:58Z|00116|ovn_bfd|INFO|Disabled BFD on interface ovn-e1cc33-0 Feb 1 04:53:58 localhost ovn_controller[152787]: 2026-02-01T09:53:58Z|00117|ovn_bfd|INFO|Disabled BFD on interface ovn-45aa31-0 Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.566 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.569 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.588 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.677 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:53:58 localhost ovn_controller[152787]: 2026-02-01T09:53:58Z|00118|binding|INFO|Releasing lport f87036fa-d537-4b85-b37c-c486487fff03 from this chassis 
(sb_readonly=0) Feb 1 04:53:58 localhost ovn_controller[152787]: 2026-02-01T09:53:58Z|00119|binding|INFO|Setting lport f87036fa-d537-4b85-b37c-c486487fff03 down in Southbound Feb 1 04:53:58 localhost kernel: device tapf87036fa-d5 left promiscuous mode Feb 1 04:53:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:58.691 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-64c4abd2-68ab-4da2-b883-4056dccfe81b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c4abd2-68ab-4da2-b883-4056dccfe81b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1713821b0f794e3b830e51e1263a38e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a20ebc6e-8957-43a9-8b71-59702d481dc9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f87036fa-d537-4b85-b37c-c486487fff03) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:53:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:58.692 158655 INFO neutron.agent.ovn.metadata.agent [-] Port f87036fa-d537-4b85-b37c-c486487fff03 in datapath 64c4abd2-68ab-4da2-b883-4056dccfe81b unbound from our chassis#033[00m Feb 1 04:53:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:58.695 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64c4abd2-68ab-4da2-b883-4056dccfe81b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:53:58 localhost ovn_metadata_agent[158650]: 2026-02-01 09:53:58.696 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2c731-9fcb-44b7-b736-2e27922f341e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:53:58 localhost nova_compute[274317]: 2026-02-01 09:53:58.711 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:00 localhost podman[236852]: time="2026-02-01T09:54:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:54:00 localhost podman[236852]: @ - - [01/Feb/2026:09:54:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157180 "" "Go-http-client/1.1" Feb 1 04:54:00 localhost podman[236852]: @ - - [01/Feb/2026:09:54:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18771 "" "Go-http-client/1.1" Feb 1 04:54:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Feb 1 04:54:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s Feb 1 04:54:00 localhost dnsmasq[306796]: exiting on receipt of SIGTERM Feb 1 04:54:00 localhost podman[306974]: 2026-02-01 09:54:00.736475665 +0000 UTC m=+0.059579918 container kill 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:54:00 localhost systemd[1]: libpod-2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b.scope: Deactivated successfully. Feb 1 04:54:00 localhost nova_compute[274317]: 2026-02-01 09:54:00.742 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:00 localhost podman[306987]: 2026-02-01 09:54:00.818036378 +0000 UTC m=+0.062692385 container died 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 1 04:54:00 localhost systemd[1]: tmp-crun.awsOCc.mount: Deactivated successfully. Feb 1 04:54:00 localhost podman[306987]: 2026-02-01 09:54:00.854100602 +0000 UTC m=+0.098756569 container cleanup 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:54:00 localhost systemd[1]: libpod-conmon-2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b.scope: Deactivated successfully. 
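[editor's note] The podman access-log entries a few lines earlier (GET /v4.9.3/libpod/containers/json?all=true...) come from a client talking to the libpod REST API on the podman socket. A minimal sketch that issues the same request, assuming the default root socket at /run/podman/podman.sock (the path the exporter containers in this log mount):

```python
# Minimal sketch: replay the libpod REST call seen in the podman API
# access-log entries above over the root podman UNIX socket and list the
# containers it returns.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a UNIX domain socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c.get("Names"), c.get("State"))
```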
Feb 1 04:54:00 localhost podman[306988]: 2026-02-01 09:54:00.900625072 +0000 UTC m=+0.140945435 container remove 2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64c4abd2-68ab-4da2-b883-4056dccfe81b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:54:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:00.930 259225 INFO neutron.agent.dhcp.agent [None req-400aff62-7135-4736-b3a6-720eef750e81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:54:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:01.340 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:54:01 localhost nova_compute[274317]: 2026-02-01 09:54:01.366 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:01 localhost openstack_network_exporter[239388]: ERROR 09:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:54:01 localhost openstack_network_exporter[239388]: Feb 1 04:54:01 localhost openstack_network_exporter[239388]: ERROR 09:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:54:01 localhost openstack_network_exporter[239388]: Feb 1 04:54:01 localhost systemd[1]: var-lib-containers-storage-overlay-a21cdbba4626f33fa36df1df6f6f66a3be4030297ce337d866e792ea89e48cc9-merged.mount: Deactivated successfully. Feb 1 04:54:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b527cbbb8d1866e999c8ae7231dfe8eb8b5171c4fa767cfc10b56b33fab020b-userdata-shm.mount: Deactivated successfully. Feb 1 04:54:01 localhost systemd[1]: run-netns-qdhcp\x2d64c4abd2\x2d68ab\x2d4da2\x2db883\x2d4056dccfe81b.mount: Deactivated successfully. Feb 1 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:54:01 localhost systemd[1]: tmp-crun.FJJRLy.mount: Deactivated successfully. 
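[editor's note] After the dnsmasq container is removed, the agent logs "Network not present, action: clean_devices" and systemd deactivates the run-netns-qdhcp mount for the network, i.e. the qdhcp namespace is being torn down. A minimal sketch, assuming the iproute2 "ip" tool and read access to /run/netns, for confirming the namespace is gone:

```python
# Minimal sketch: list the remaining qdhcp-* network namespaces after the
# teardown logged above and check that the one for this network was removed.
import subprocess

NETWORK_ID = "64c4abd2-68ab-4da2-b883-4056dccfe81b"

out = subprocess.run(["ip", "netns", "list"],
                     capture_output=True, text=True, check=True).stdout
namespaces = [line.split()[0] for line in out.splitlines() if line.strip()]
qdhcp = [ns for ns in namespaces if ns.startswith("qdhcp-")]

print("qdhcp namespaces:", qdhcp or "(none)")
print("target namespace removed:", f"qdhcp-{NETWORK_ID}" not in qdhcp)
```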
Feb 1 04:54:01 localhost podman[307015]: 2026-02-01 09:54:01.860366378 +0000 UTC m=+0.095290911 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:54:01 localhost podman[307015]: 2026-02-01 09:54:01.870999629 +0000 UTC m=+0.105924192 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:54:01 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
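[editor's note] The transient "podman healthcheck run <id>" units above finish with health_status=healthy container events. The same verdict is recorded in podman's inspect output; a hedged sketch that reads it defensively, since the key under State has been spelled both "Health" and "Healthcheck" across podman releases:

```python
# Minimal sketch: read the last recorded health status for the containers
# whose healthcheck runs appear in the log above, via "podman inspect".
import json
import subprocess

def health_status(container):
    data = json.loads(
        subprocess.run(["podman", "inspect", container],
                       capture_output=True, text=True, check=True).stdout
    )[0]
    state = data.get("State", {})
    # key name varies by podman version
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

for name in ("ceilometer_agent_compute", "podman_exporter"):
    print(name, "->", health_status(name))
```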
Feb 1 04:54:01 localhost nova_compute[274317]: 2026-02-01 09:54:01.944 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s Feb 1 04:54:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s Feb 1 04:54:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:05 localhost nova_compute[274317]: 2026-02-01 09:54:05.787 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s Feb 1 04:54:06 localhost nova_compute[274317]: 2026-02-01 09:54:06.368 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:06 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e113 e113: 6 total, 6 up, 6 in Feb 1 04:54:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s Feb 1 04:54:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s Feb 1 04:54:10 localhost nova_compute[274317]: 2026-02-01 09:54:10.789 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:11 localhost nova_compute[274317]: 2026-02-01 09:54:11.372 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s Feb 1 04:54:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:54:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:15.130 2 INFO neutron.agent.securitygroups_rpc [req-a317a60f-1d94-4e94-8ad3-4c22c8825b6a req-40e6bba5-b2d9-4d66-aeb2-e562a81ad61e aacab7e8f6444706a62ff16c6574833f d0194caf1b6343f4859fdcc75c872cf3 - - default default] Security group rule updated ['639fab50-7eda-41c7-96b9-ca352e9a9f06']#033[00m Feb 1 04:54:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e114 e114: 6 total, 6 up, 6 in Feb 1 
04:54:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:15.723 2 INFO neutron.agent.securitygroups_rpc [req-4b9050d5-0e1f-4517-a597-752dbe7a20e4 req-3b9daee1-df3b-4626-857b-13f8996518fb aacab7e8f6444706a62ff16c6574833f d0194caf1b6343f4859fdcc75c872cf3 - - default default] Security group rule updated ['639fab50-7eda-41c7-96b9-ca352e9a9f06']#033[00m Feb 1 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:54:15 localhost nova_compute[274317]: 2026-02-01 09:54:15.791 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:15 localhost podman[307038]: 2026-02-01 09:54:15.885461717 +0000 UTC m=+0.085821176 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:54:15 localhost podman[307038]: 2026-02-01 09:54:15.924641298 +0000 UTC m=+0.125000737 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true) Feb 1 04:54:15 localhost podman[307039]: 2026-02-01 09:54:15.936394585 +0000 UTC m=+0.134531355 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:54:15 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
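[editor's note] The node_exporter container above runs with --collector.systemd plus a unit-include regex and publishes on host port 9100 (ports 9100:9100). A minimal sketch for scraping that endpoint and printing the per-unit series the systemd collector exports; the metric name is the upstream collector's and may shift between exporter versions:

```python
# Minimal sketch: scrape the node_exporter metrics endpoint from the config
# above and show the systemd unit-state samples.  Only units matched by the
# --collector.systemd.unit-include regex are exported in the first place.
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    body = resp.read().decode()

for line in body.splitlines():
    if line.startswith("node_systemd_unit_state") and 'state="active"' in line:
        print(line)
```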
Feb 1 04:54:15 localhost podman[307039]: 2026-02-01 09:54:15.946581432 +0000 UTC m=+0.144718142 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:54:15 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:54:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 1 04:54:16 localhost nova_compute[274317]: 2026-02-01 09:54:16.373 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.508308) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656508348, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1987, "num_deletes": 261, "total_data_size": 2767283, "memory_usage": 2812848, "flush_reason": "Manual Compaction"} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656520186, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1775229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16690, "largest_seqno": 18672, "table_properties": {"data_size": 1767717, "index_size": 4405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16439, "raw_average_key_size": 20, "raw_value_size": 1752217, "raw_average_value_size": 2187, "num_data_blocks": 193, "num_entries": 801, "num_filter_entries": 801, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939534, "oldest_key_time": 1769939534, "file_creation_time": 1769939656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11921 microseconds, and 5219 cpu microseconds. Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
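[editor's note] The ceph-mon messages here embed rocksdb events as JSON after the "EVENT_LOG_v1 " marker (flush_started and table_file_creation above, with the compaction events following). A minimal sketch for pulling those records out of a saved copy of this log; "mon.log" is a placeholder for wherever the lines were captured:

```python
# Minimal sketch: extract and summarise the rocksdb EVENT_LOG_v1 JSON records
# embedded in the ceph-mon log messages above.
import json

MARKER = "EVENT_LOG_v1 "

def rocksdb_events(path="mon.log"):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            idx = line.find(MARKER)
            if idx == -1:
                continue
            try:
                yield json.loads(line[idx + len(MARKER):])
            except json.JSONDecodeError:
                continue  # record wrapped across lines; skip the fragment

for ev in rocksdb_events():
    print(ev.get("event"),
          {k: ev[k] for k in ("job", "file_size", "total_output_size") if k in ev})
```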
Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.520229) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1775229 bytes OK Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.520253) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.522957) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.522981) EVENT_LOG_v1 {"time_micros": 1769939656522975, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.523002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2758165, prev total WAL file size 2758489, number of live WAL files 2. Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.523792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373633' seq:72057594037927935, type:22 .. '6C6F676D0034303134' seq:0, type:0; will stop at (end) Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1733KB)], [21(20MB)] Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656523833, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 23504588, "oldest_snapshot_seqno": -1} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12521 keys, 23327628 bytes, temperature: kUnknown Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656683729, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 23327628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 23253044, "index_size": 42163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31365, "raw_key_size": 336652, "raw_average_key_size": 26, "raw_value_size": 23036583, "raw_average_value_size": 1839, "num_data_blocks": 1607, "num_entries": 12521, "num_filter_entries": 12521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.684191) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 23327628 bytes Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.686806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.8 rd, 145.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 20.7 +0.0 blob) out(22.2 +0.0 blob), read-write-amplify(26.4) write-amplify(13.1) OK, records in: 13059, records dropped: 538 output_compression: NoCompression Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.686827) EVENT_LOG_v1 {"time_micros": 1769939656686818, "job": 10, "event": "compaction_finished", "compaction_time_micros": 160151, "compaction_time_cpu_micros": 57526, "output_level": 6, "num_output_files": 1, "total_output_size": 23327628, "num_input_records": 13059, "num_output_records": 12521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656687139, "job": 10, "event": "table_file_deletion", "file_number": 23} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939656689142, "job": 10, "event": "table_file_deletion", "file_number": 21} Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.523717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.689206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.689213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.689217) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.689220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:16 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:16.689224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:17.027 2 INFO neutron.agent.securitygroups_rpc [None req-f508e6c2-9093-4f47-a287-c55eb4d8e7d1 ff35eaef616c4f428644a9a881f035d4 9bbefd3c06294b7fa7720ba6ca48fa4b - - default default] Security group rule updated ['d6a2366a-be19-483b-bd9c-86227fb6f0c8']#033[00m Feb 1 04:54:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:17.268 2 INFO neutron.agent.securitygroups_rpc [None req-f0037227-79b1-4433-9269-e9d8a6c269aa ff35eaef616c4f428644a9a881f035d4 9bbefd3c06294b7fa7720ba6ca48fa4b - - default default] Security group rule updated ['d6a2366a-be19-483b-bd9c-86227fb6f0c8']#033[00m Feb 1 04:54:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e115 e115: 6 total, 6 up, 6 in Feb 1 04:54:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 3.4 KiB/s wr, 41 op/s Feb 1 04:54:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e116 e116: 6 total, 6 up, 6 in Feb 1 04:54:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e117 e117: 6 total, 6 up, 6 in Feb 1 04:54:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 5.5 KiB/s wr, 68 op/s Feb 1 04:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
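[editor's note] The compaction summary above reports write-amplify(13.1) and read-write-amplify(26.4); both ratios follow directly from the byte counts rocksdb logged for the same job, as this small check shows:

```python
# Minimal sketch: reproduce the amplification figures from the compaction
# events above, using the logged byte counts.
l0_input = 1_775_229      # flush table #23 "file_size" (table_file_creation)
total_input = 23_504_588  # "input_data_size" in compaction_started (#23 + #21)
output = 23_327_628       # "total_output_size" in compaction_finished

write_amplify = output / l0_input
read_write_amplify = (total_input + output) / l0_input
print(f"write-amplify      ~ {write_amplify:.1f}")       # ~13.1
print(f"read-write-amplify ~ {read_write_amplify:.1f}")  # ~26.4
```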
Feb 1 04:54:20 localhost nova_compute[274317]: 2026-02-01 09:54:20.792 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:20 localhost podman[307086]: 2026-02-01 09:54:20.871568433 +0000 UTC m=+0.082987223 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Feb 1 04:54:20 localhost podman[307086]: 2026-02-01 09:54:20.879743405 +0000 UTC m=+0.091162165 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:54:20 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:54:20 localhost podman[307085]: 2026-02-01 09:54:20.927468804 +0000 UTC m=+0.142513566 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, version=9.7, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Feb 1 04:54:20 localhost podman[307085]: 2026-02-01 09:54:20.939682643 +0000 UTC m=+0.154727405 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container) Feb 1 04:54:20 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:54:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:54:21
Feb 1 04:54:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 1 04:54:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap
Feb 1 04:54:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'manila_data', 'volumes', 'vms', 'backups']
Feb 1 04:54:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes
Feb 1 04:54:21 localhost nova_compute[274317]: 2026-02-01 09:54:21.376 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections..
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: []
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections..
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: []
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections..
Feb 1 04:54:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: []
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004301291614321608 of space, bias 1.0, pg target 0.8588245589928811 quantized to 32 (current 32)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 1 04:54:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16)
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 1 04:54:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 1 04:54:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 4.5 KiB/s wr, 55 op/s
Feb 1 04:54:22 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:22.462 2 INFO neutron.agent.securitygroups_rpc [req-83e0e8cb-3429-4ade-bafe-f7d6f9e3d311 req-5bc5dc3b-802b-43fa-a784-b37e50cbe40a ff35eaef616c4f428644a9a881f035d4 9bbefd3c06294b7fa7720ba6ca48fa4b - - default default] Security group member updated ['d6a2366a-be19-483b-bd9c-86227fb6f0c8']
Feb 1 04:54:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 192 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 3.2 MiB/s wr, 132 op/s
Feb 1 04:54:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 1 04:54:24 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.?
172.18.0.107:0/2769998936' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:54:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e118 e118: 6 total, 6 up, 6 in Feb 1 04:54:25 localhost nova_compute[274317]: 2026-02-01 09:54:25.794 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 192 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 2.8 MiB/s wr, 115 op/s Feb 1 04:54:26 localhost nova_compute[274317]: 2026-02-01 09:54:26.378 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e119 e119: 6 total, 6 up, 6 in Feb 1 04:54:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 2.7 MiB/s wr, 152 op/s Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.522 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:54:28 localhost nova_compute[274317]: 2026-02-01 09:54:28.523 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.524 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:54:28 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:28.735 2 INFO neutron.agent.securitygroups_rpc [None req-ff9f5f38-da01-49cd-ad4a-92231356a657 ff147cab913d4d439b1d697fdf7e96ba dd3a0e574d0f493cafe8d66c78341de5 - - default default] Security group member updated ['39ab8694-6bb0-4b5a-b2c8-cff6705213f5']#033[00m Feb 1 04:54:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:54:28 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:28.859 259225 INFO neutron.agent.linux.ip_lib [None req-50add5a8-54a9-4f30-886b-965a6a102f12 - - - - - -] Device tap189326ee-2f cannot be used as it has no MAC address#033[00m Feb 1 04:54:28 localhost nova_compute[274317]: 2026-02-01 09:54:28.881 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:28 localhost systemd[1]: tmp-crun.J4EQDp.mount: Deactivated successfully. 
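The pg_autoscaler entries at 09:54:21 above log, for each pool, the fraction of raw capacity it uses, a bias, and a fractional PG target that is then quantized and compared against the current pg_num; the 45071990784 bytes in the effective_target_ratio lines is the same ~42 GiB of raw capacity the pgmap entries report. As a rough cross-check only (the relation below is an assumption inferred from the numbers, not the ceph-mgr module's actual code), the logged targets line up to within about one percent with capacity_ratio x bias x a PG budget of roughly 200, which is consistent with 6 OSDs, 3x replication and the default 100 PGs per OSD:

    # Hedged cross-check of the pg_autoscaler figures logged at 09:54:21; the
    # formula is an assumption, not the ceph-mgr implementation.
    pg_budget = 100 * 6 / 3  # assumed: default mon_target_pg_per_osd x 6 OSDs / 3x replication

    pools = {
        # name: (capacity_ratio, bias, pg target as logged)
        '.mgr':            (3.080724804578448e-05,  1.0, 0.006161449609156895),
        'vms':             (0.003325274375348967,   1.0, 0.6650548750697934),
        'images':          (0.004301291614321608,   1.0, 0.8588245589928811),
        'manila_metadata': (2.7263051367950866e-06, 4.0, 0.002170138888888889),
    }
    for name, (ratio, bias, logged) in pools.items():
        estimate = ratio * bias * pg_budget
        print(f"{name:16s} estimate {estimate:.6f}  logged {logged:.6f}")
    # Every estimate is far below the pools' current pg_num (1/32/32/16), so the
    # autoscaler leaves pg_num untouched, which is why each line ends "(current N)".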
Feb 1 04:54:28 localhost kernel: device tap189326ee-2f entered promiscuous mode Feb 1 04:54:28 localhost NetworkManager[5972]: [1769939668.9030] manager: (tap189326ee-2f): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Feb 1 04:54:28 localhost nova_compute[274317]: 2026-02-01 09:54:28.904 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:28 localhost ovn_controller[152787]: 2026-02-01T09:54:28Z|00120|binding|INFO|Claiming lport 189326ee-2f74-4f24-9cd3-a164e6fb714b for this chassis. Feb 1 04:54:28 localhost ovn_controller[152787]: 2026-02-01T09:54:28Z|00121|binding|INFO|189326ee-2f74-4f24-9cd3-a164e6fb714b: Claiming unknown Feb 1 04:54:28 localhost podman[307125]: 2026-02-01 09:54:28.906444633 +0000 UTC m=+0.099860376 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible) Feb 1 04:54:28 localhost systemd-udevd[307149]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:54:28 localhost podman[307125]: 2026-02-01 09:54:28.917914118 +0000 UTC m=+0.111329881 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.920 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c02f9419-6799-4a45-bf83-c316a3817c7c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c02f9419-6799-4a45-bf83-c316a3817c7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7aa5c461f9764c8e9c6f7f88a3f3fe97', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08cda268-c34a-48a4-b851-ac14c0cb1641, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=189326ee-2f74-4f24-9cd3-a164e6fb714b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.922 158655 INFO neutron.agent.ovn.metadata.agent [-] 
Port 189326ee-2f74-4f24-9cd3-a164e6fb714b in datapath c02f9419-6799-4a45-bf83-c316a3817c7c bound to our chassis#033[00m Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.924 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c02f9419-6799-4a45-bf83-c316a3817c7c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:54:28 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:28.925 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[513fd65b-9f65-42ed-a190-e397e77c9d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:54:28 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:54:28 localhost ovn_controller[152787]: 2026-02-01T09:54:28Z|00122|binding|INFO|Setting lport 189326ee-2f74-4f24-9cd3-a164e6fb714b ovn-installed in OVS Feb 1 04:54:28 localhost ovn_controller[152787]: 2026-02-01T09:54:28Z|00123|binding|INFO|Setting lport 189326ee-2f74-4f24-9cd3-a164e6fb714b up in Southbound Feb 1 04:54:28 localhost nova_compute[274317]: 2026-02-01 09:54:28.952 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:28 localhost nova_compute[274317]: 2026-02-01 09:54:28.985 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:29 localhost nova_compute[274317]: 2026-02-01 09:54:29.013 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:29.526 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:54:29 localhost podman[307206]: Feb 1 04:54:29 localhost podman[307206]: 2026-02-01 09:54:29.848864353 +0000 UTC m=+0.087156751 container create 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:54:29 localhost systemd[1]: Started libpod-conmon-4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604.scope. Feb 1 04:54:29 localhost podman[307206]: 2026-02-01 09:54:29.807326176 +0000 UTC m=+0.045618624 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:54:29 localhost systemd[1]: Started libcrun container. 
Feb 1 04:54:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113189f63466168ef0eabf3272676643536a7b94540e37806059557d37db92bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:54:29 localhost podman[307206]: 2026-02-01 09:54:29.942893017 +0000 UTC m=+0.181185425 container init 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:54:29 localhost dnsmasq[307224]: started, version 2.85 cachesize 150 Feb 1 04:54:29 localhost dnsmasq[307224]: DNS service limited to local subnets Feb 1 04:54:29 localhost dnsmasq[307224]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:54:29 localhost dnsmasq[307224]: warning: no upstream servers configured Feb 1 04:54:29 localhost dnsmasq-dhcp[307224]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:54:29 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 0 addresses Feb 1 04:54:29 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:29 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:29 localhost podman[307206]: 2026-02-01 09:54:29.969226863 +0000 UTC m=+0.207519261 container start 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:54:30 localhost podman[236852]: time="2026-02-01T09:54:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:54:30 localhost podman[236852]: @ - - [01/Feb/2026:09:54:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157180 "" "Go-http-client/1.1" Feb 1 04:54:30 localhost podman[236852]: @ - - [01/Feb/2026:09:54:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18767 "" "Go-http-client/1.1" Feb 1 04:54:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:30 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:30.132 259225 INFO neutron.agent.dhcp.agent [None req-8c5e2a40-9261-4f0f-b0c6-6bc97b29844d - - - - - -] DHCP configuration for ports {'26db4edd-796f-4cee-a122-9e82374993e6'} is completed#033[00m Feb 1 04:54:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : 
pgmap v186: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 2.7 MiB/s wr, 152 op/s Feb 1 04:54:30 localhost nova_compute[274317]: 2026-02-01 09:54:30.796 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:30 localhost systemd[1]: tmp-crun.Mr6ssH.mount: Deactivated successfully. Feb 1 04:54:31 localhost nova_compute[274317]: 2026-02-01 09:54:31.402 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:31 localhost openstack_network_exporter[239388]: ERROR 09:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:54:31 localhost openstack_network_exporter[239388]: Feb 1 04:54:31 localhost openstack_network_exporter[239388]: ERROR 09:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:54:31 localhost openstack_network_exporter[239388]: Feb 1 04:54:32 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:32.152 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:54:31Z, description=, device_id=2d5747c6-cbdf-4151-8b77-e62f81a5dd69, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d960006d-012f-4999-af0e-537b8af1210c, ip_allocation=immediate, mac_address=fa:16:3e:42:59:f8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:26Z, description=, dns_domain=, id=c02f9419-6799-4a45-bf83-c316a3817c7c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-656358925, port_security_enabled=True, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4020, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1188, status=ACTIVE, subnets=['b85ccb18-7d4a-4256-96ba-e762f4efe60c'], tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:27Z, vlan_transparent=None, network_id=c02f9419-6799-4a45-bf83-c316a3817c7c, port_security_enabled=False, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1219, status=DOWN, tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:31Z on network c02f9419-6799-4a45-bf83-c316a3817c7c#033[00m Feb 1 04:54:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 42 op/s Feb 1 04:54:32 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 1 addresses Feb 1 04:54:32 localhost podman[307242]: 2026-02-01 09:54:32.356512623 +0000 UTC m=+0.056801321 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:54:32 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:32 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:54:32 localhost podman[307257]: 2026-02-01 09:54:32.480853746 +0000 UTC m=+0.091207368 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:54:32 localhost podman[307257]: 2026-02-01 09:54:32.494611152 +0000 UTC m=+0.104964784 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:54:32 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
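The dnsmasq instance started at 09:54:29 above serves the qdhcp namespace for network c02f9419-6799-4a45-bf83-c316a3817c7c out of /var/lib/neutron/dhcp/<network_id>/, which holds the host, addn_hosts and opts files it keeps re-reading. The sketch below is a hypothetical reconstruction of that invocation, inferred only from the log messages (static leases, a 1d lease time on the 10.100.0.0/28 subnet, the three per-network files); the authoritative flag list lives in neutron's dnsmasq driver, not here:

    # Assumed shape of the command behind the neutron-dnsmasq-qdhcp-* container above.
    conf_dir = "/var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c"
    argv = [
        "dnsmasq", "--no-hosts", "--no-resolv",
        f"--dhcp-hostsfile={conf_dir}/host",    # static leases ("read .../host")
        f"--addn-hosts={conf_dir}/addn_hosts",  # name/IP pairs ("... - 1 addresses")
        f"--dhcp-optsfile={conf_dir}/opts",     # per-port/subnet DHCP options
        "--dhcp-range=set:subnet,10.100.0.0,static,255.255.255.240,1d",
        "--local-service",                      # "DNS service limited to local subnets"
    ]
    print(" ".join(argv))

The podman "container kill" events around it (09:54:32 above, and again between 09:54:35 and 09:54:41 below) are evidently reload signals rather than terminations: dnsmasq stays up and immediately re-reads those files each time, whereas the SIGTERM at 09:54:43 below is the real teardown of the namespace.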
Feb 1 04:54:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e120 e120: 6 total, 6 up, 6 in Feb 1 04:54:32 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:32.590 259225 INFO neutron.agent.dhcp.agent [None req-14d13840-ac7b-4ed9-8f95-e381216dacfd - - - - - -] DHCP configuration for ports {'d960006d-012f-4999-af0e-537b8af1210c'} is completed#033[00m Feb 1 04:54:33 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:33.576 2 INFO neutron.agent.securitygroups_rpc [None req-9125c9fe-67c0-46c6-98f6-2771b3ce7427 9a33ad723bea40f8bb6325e752986a5b 7b25cdb96bed441fa12160a57bca4d9c - - default default] Security group member updated ['e61e0f68-6135-4301-ab8c-68625c4e91d7']#033[00m Feb 1 04:54:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 304 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 14 MiB/s wr, 174 op/s Feb 1 04:54:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:54:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:54:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:54:34 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:54:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:54:34 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 57610793-d3df-41d6-8fdb-39c5f443f0b7 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:54:34 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 57610793-d3df-41d6-8fdb-39c5f443f0b7 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:54:34 localhost ceph-mgr[278126]: [progress INFO root] Completed event 57610793-d3df-41d6-8fdb-39c5f443f0b7 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:54:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:54:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:54:34 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:54:34 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:54:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e121 e121: 6 total, 6 up, 6 in Feb 1 04:54:34 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:34.705 2 INFO neutron.agent.securitygroups_rpc [None req-1d734dba-1bcf-45d6-b4fb-cb8bacf3e60d 9a33ad723bea40f8bb6325e752986a5b 7b25cdb96bed441fa12160a57bca4d9c - - default default] Security group member updated ['e61e0f68-6135-4301-ab8c-68625c4e91d7']#033[00m Feb 1 04:54:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 
348127232 kv_alloc: 322961408 Feb 1 04:54:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:35.144 2 INFO neutron.agent.securitygroups_rpc [None req-546fc9ec-f61e-4cb1-ba06-2baf55334087 ff147cab913d4d439b1d697fdf7e96ba dd3a0e574d0f493cafe8d66c78341de5 - - default default] Security group member updated ['39ab8694-6bb0-4b5a-b2c8-cff6705213f5']#033[00m Feb 1 04:54:35 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:35.218 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:54:31Z, description=, device_id=2d5747c6-cbdf-4151-8b77-e62f81a5dd69, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d960006d-012f-4999-af0e-537b8af1210c, ip_allocation=immediate, mac_address=fa:16:3e:42:59:f8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:26Z, description=, dns_domain=, id=c02f9419-6799-4a45-bf83-c316a3817c7c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-656358925, port_security_enabled=True, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4020, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1188, status=ACTIVE, subnets=['b85ccb18-7d4a-4256-96ba-e762f4efe60c'], tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:27Z, vlan_transparent=None, network_id=c02f9419-6799-4a45-bf83-c316a3817c7c, port_security_enabled=False, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1219, status=DOWN, tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:31Z on network c02f9419-6799-4a45-bf83-c316a3817c7c#033[00m Feb 1 04:54:35 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 1 addresses Feb 1 04:54:35 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:35 localhost podman[307389]: 2026-02-01 09:54:35.452380628 +0000 UTC m=+0.063544479 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:54:35 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e122 e122: 6 total, 6 up, 6 in Feb 1 04:54:35 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:35.646 259225 INFO neutron.agent.dhcp.agent [None req-4b5745f8-2e45-4fe8-925d-bc8f590991bb - - - - - -] DHCP configuration for ports {'d960006d-012f-4999-af0e-537b8af1210c'} is completed#033[00m Feb 
1 04:54:35 localhost nova_compute[274317]: 2026-02-01 09:54:35.798 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:35.954 2 INFO neutron.agent.securitygroups_rpc [None req-9bcabe14-1107-4613-a202-1866c6f3ee13 9a33ad723bea40f8bb6325e752986a5b 7b25cdb96bed441fa12160a57bca4d9c - - default default] Security group member updated ['e61e0f68-6135-4301-ab8c-68625c4e91d7']#033[00m Feb 1 04:54:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 304 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 19 MiB/s wr, 175 op/s Feb 1 04:54:36 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:36.300 2 INFO neutron.agent.securitygroups_rpc [None req-7830ec90-7d5e-4488-a4fe-cfe1f6b35ae5 21d02ef23bf34fe3ad07a151844e8a84 7aa5c461f9764c8e9c6f7f88a3f3fe97 - - default default] Security group member updated ['a27a2b34-3872-4d18-89d2-71a867c33b37']#033[00m Feb 1 04:54:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:36.340 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:54:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=772a1edd-8dae-414e-a3f1-bfe14c7a0938, ip_allocation=immediate, mac_address=fa:16:3e:d6:e7:2b, name=tempest-FloatingIPNegativeTestJSON-1037090788, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:26Z, description=, dns_domain=, id=c02f9419-6799-4a45-bf83-c316a3817c7c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-656358925, port_security_enabled=True, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4020, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1188, status=ACTIVE, subnets=['b85ccb18-7d4a-4256-96ba-e762f4efe60c'], tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:27Z, vlan_transparent=None, network_id=c02f9419-6799-4a45-bf83-c316a3817c7c, port_security_enabled=True, project_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a27a2b34-3872-4d18-89d2-71a867c33b37'], standard_attr_id=1250, status=DOWN, tags=[], tenant_id=7aa5c461f9764c8e9c6f7f88a3f3fe97, updated_at=2026-02-01T09:54:36Z on network c02f9419-6799-4a45-bf83-c316a3817c7c#033[00m Feb 1 04:54:36 localhost nova_compute[274317]: 2026-02-01 09:54:36.436 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:36 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:54:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:54:36 localhost podman[307428]: 2026-02-01 09:54:36.588549842 +0000 UTC m=+0.059971638 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:54:36 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 2 addresses Feb 1 04:54:36 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:36 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:54:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:36.785 259225 INFO neutron.agent.dhcp.agent [None req-b6af9f1a-7cb7-4173-b71f-a0a40a4d3f3a - - - - - -] DHCP configuration for ports {'772a1edd-8dae-414e-a3f1-bfe14c7a0938'} is completed#033[00m Feb 1 04:54:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:37.324 2 INFO neutron.agent.securitygroups_rpc [None req-2518325d-e6ff-412d-931e-351c87841bd0 9a33ad723bea40f8bb6325e752986a5b 7b25cdb96bed441fa12160a57bca4d9c - - default default] Security group member updated ['e61e0f68-6135-4301-ab8c-68625c4e91d7']#033[00m Feb 1 04:54:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 19 MiB/s wr, 268 op/s Feb 1 04:54:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 4.3 KiB/s wr, 73 op/s Feb 1 04:54:40 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:40.541 2 INFO neutron.agent.securitygroups_rpc [None req-515ecb9d-14c8-49e9-8847-f48fb7c12a8c 21d02ef23bf34fe3ad07a151844e8a84 7aa5c461f9764c8e9c6f7f88a3f3fe97 - - default default] Security group member updated ['a27a2b34-3872-4d18-89d2-71a867c33b37']#033[00m Feb 1 04:54:40 localhost podman[307467]: 2026-02-01 09:54:40.793925226 +0000 UTC m=+0.057837703 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:54:40 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 1 addresses Feb 1 04:54:40 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:40 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:40 localhost nova_compute[274317]: 2026-02-01 09:54:40.800 274321 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:41 localhost nova_compute[274317]: 2026-02-01 09:54:41.470 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 e123: 6 total, 6 up, 6 in Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.772 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.773 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.773 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:54:41 localhost podman[307507]: 2026-02-01 09:54:41.821730193 +0000 UTC m=+0.068053799 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:54:41 localhost dnsmasq[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/addn_hosts - 0 addresses Feb 1 04:54:41 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/host Feb 1 04:54:41 localhost dnsmasq-dhcp[307224]: read /var/lib/neutron/dhcp/c02f9419-6799-4a45-bf83-c316a3817c7c/opts Feb 1 04:54:41 localhost systemd[1]: tmp-crun.LxDESU.mount: Deactivated successfully. 
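The lockutils DEBUG trio above (Acquiring / acquired / released around "_check_child_processes") is the standard oslo.concurrency pattern for serializing a periodic check. A minimal sketch, assuming the usual decorator form; the function body is a stand-in, not neutron's ProcessMonitor code:

    # Reproduces the Acquiring/acquired/released DEBUG lines when DEBUG logging is on.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # In neutron, ProcessMonitor walks its registered external processes
        # (e.g. the per-network dnsmasq/haproxy helpers) and respawns dead ones.
        pass

    check_child_processes()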
Feb 1 04:54:41 localhost ovn_controller[152787]: 2026-02-01T09:54:41Z|00124|binding|INFO|Releasing lport 189326ee-2f74-4f24-9cd3-a164e6fb714b from this chassis (sb_readonly=0) Feb 1 04:54:41 localhost nova_compute[274317]: 2026-02-01 09:54:41.951 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:41 localhost ovn_controller[152787]: 2026-02-01T09:54:41Z|00125|binding|INFO|Setting lport 189326ee-2f74-4f24-9cd3-a164e6fb714b down in Southbound Feb 1 04:54:41 localhost kernel: device tap189326ee-2f left promiscuous mode Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.961 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c02f9419-6799-4a45-bf83-c316a3817c7c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c02f9419-6799-4a45-bf83-c316a3817c7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7aa5c461f9764c8e9c6f7f88a3f3fe97', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08cda268-c34a-48a4-b851-ac14c0cb1641, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=189326ee-2f74-4f24-9cd3-a164e6fb714b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.964 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 189326ee-2f74-4f24-9cd3-a164e6fb714b in datapath c02f9419-6799-4a45-bf83-c316a3817c7c unbound from our chassis#033[00m Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.967 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c02f9419-6799-4a45-bf83-c316a3817c7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:54:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:41.968 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf8dc0f-21a4-406e-8e3d-a40bf6449db4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:54:41 localhost nova_compute[274317]: 2026-02-01 09:54:41.972 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 192 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 4.3 KiB/s wr, 73 op/s Feb 1 04:54:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:42.246 2 INFO neutron.agent.securitygroups_rpc [None req-85d5be93-4f1a-4c18-9ce2-6a112b54530f 
84f3db440e5d42c59396aab4e1ffcfd9 2a205e14a65e4950b2897f78a7089f09 - - default default] Security group member updated ['9edef165-badf-4d99-97d5-46869e0947c8']#033[00m Feb 1 04:54:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:42.565 2 INFO neutron.agent.securitygroups_rpc [None req-86fc0e8a-dc01-4804-b333-df33401eb55c ba01912592664d639fa7a27174068a0f a8a2395fa8604962aa6888633ff95bee - - default default] Security group member updated ['adcc453c-f15e-407c-b903-8df7ba9f8ef6']#033[00m Feb 1 04:54:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:43.114 2 INFO neutron.agent.securitygroups_rpc [None req-ea460609-a7b7-4b88-8971-9c496984f41d ba01912592664d639fa7a27174068a0f a8a2395fa8604962aa6888633ff95bee - - default default] Security group member updated ['adcc453c-f15e-407c-b903-8df7ba9f8ef6']#033[00m Feb 1 04:54:43 localhost dnsmasq[307224]: exiting on receipt of SIGTERM Feb 1 04:54:43 localhost podman[307549]: 2026-02-01 09:54:43.131253728 +0000 UTC m=+0.064521420 container kill 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:54:43 localhost systemd[1]: libpod-4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604.scope: Deactivated successfully. Feb 1 04:54:43 localhost podman[307563]: 2026-02-01 09:54:43.210188375 +0000 UTC m=+0.061696563 container died 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:54:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604-userdata-shm.mount: Deactivated successfully. Feb 1 04:54:43 localhost podman[307563]: 2026-02-01 09:54:43.245903711 +0000 UTC m=+0.097411859 container cleanup 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:54:43 localhost systemd[1]: libpod-conmon-4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604.scope: Deactivated successfully. 
Feb 1 04:54:43 localhost podman[307565]: 2026-02-01 09:54:43.29492363 +0000 UTC m=+0.136687346 container remove 4d79b84367e5eeadca304688702fd21953ae916c1a53dd25201c5a123ebed604 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c02f9419-6799-4a45-bf83-c316a3817c7c, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:54:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:43.653 259225 INFO neutron.agent.dhcp.agent [None req-c7ad89cb-3f40-49bb-b07d-34594349c61b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:54:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:43.653 259225 INFO neutron.agent.dhcp.agent [None req-c7ad89cb-3f40-49bb-b07d-34594349c61b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:54:44 localhost systemd[1]: var-lib-containers-storage-overlay-113189f63466168ef0eabf3272676643536a7b94540e37806059557d37db92bb-merged.mount: Deactivated successfully. Feb 1 04:54:44 localhost systemd[1]: run-netns-qdhcp\x2dc02f9419\x2d6799\x2d4a45\x2dbf83\x2dc316a3817c7c.mount: Deactivated successfully. Feb 1 04:54:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:44.193 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:54:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 217 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 341 KiB/s rd, 2.9 MiB/s wr, 129 op/s Feb 1 04:54:44 localhost nova_compute[274317]: 2026-02-01 09:54:44.609 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:44 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:44.662 2 INFO neutron.agent.securitygroups_rpc [None req-9845e2e7-e54c-48b9-9b8f-f8c7a4c52742 84f3db440e5d42c59396aab4e1ffcfd9 2a205e14a65e4950b2897f78a7089f09 - - default default] Security group member updated ['9edef165-badf-4d99-97d5-46869e0947c8']#033[00m Feb 1 04:54:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:45 localhost nova_compute[274317]: 2026-02-01 09:54:45.839 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:46 localhost nova_compute[274317]: 2026-02-01 09:54:46.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:46 localhost nova_compute[274317]: 2026-02-01 09:54:46.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:54:46 localhost nova_compute[274317]: 2026-02-01 09:54:46.102 274321 DEBUG nova.compute.manager [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:54:46 localhost nova_compute[274317]: 2026-02-01 09:54:46.114 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:54:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 217 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 293 KiB/s rd, 2.5 MiB/s wr, 111 op/s Feb 1 04:54:46 localhost nova_compute[274317]: 2026-02-01 09:54:46.471 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:54:46 localhost podman[307591]: 2026-02-01 09:54:46.862760189 +0000 UTC m=+0.067844114 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:54:46 localhost podman[307591]: 2026-02-01 09:54:46.900732646 +0000 UTC m=+0.105816521 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:54:46 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:54:46 localhost podman[307590]: 2026-02-01 09:54:46.995624777 +0000 UTC m=+0.201439174 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 04:54:47 localhost podman[307590]: 2026-02-01 09:54:47.033741458 +0000 UTC m=+0.239555895 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:54:47 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:54:47 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:47.344 259225 INFO neutron.agent.linux.ip_lib [None req-4a49e685-4102-47ab-9f1a-28ef199c1e58 - - - - - -] Device tap9f362718-c5 cannot be used as it has no MAC address#033[00m Feb 1 04:54:47 localhost nova_compute[274317]: 2026-02-01 09:54:47.365 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:47 localhost kernel: device tap9f362718-c5 entered promiscuous mode Feb 1 04:54:47 localhost NetworkManager[5972]: [1769939687.3742] manager: (tap9f362718-c5): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Feb 1 04:54:47 localhost nova_compute[274317]: 2026-02-01 09:54:47.373 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:47 localhost ovn_controller[152787]: 2026-02-01T09:54:47Z|00126|binding|INFO|Claiming lport 9f362718-c529-402d-be4c-23264e6d4d0a for this chassis. Feb 1 04:54:47 localhost ovn_controller[152787]: 2026-02-01T09:54:47Z|00127|binding|INFO|9f362718-c529-402d-be4c-23264e6d4d0a: Claiming unknown Feb 1 04:54:47 localhost systemd-udevd[307649]: Network interface NamePolicy= disabled on kernel command line. 
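
The node_exporter and ovn_controller entries just above are periodic podman healthchecks: systemd starts a transient /usr/bin/podman healthcheck run unit, podman emits a container health_status event (health_status=healthy here) followed by exec_died, and the unit deactivates. A rough sketch, against the same placeholder capture file, that tallies those health_status events per container name:

import re
from collections import Counter

LOG_PATH = "messages.log"  # placeholder: same capture as above
# The podman event lines carry a key=value dump; pull name= and health_status= out of it.
name_re = re.compile(r"[(,]\s*name=([^,)]+)")
status_re = re.compile(r"health_status=([^,)]+)")

counts = Counter()
with open(LOG_PATH, errors="replace") as fh:
    for line in fh:
        if "container health_status" not in line:
            continue
        name = name_re.search(line)
        status = status_re.search(line)
        if name and status:
            counts[(name.group(1).strip(), status.group(1).strip())] += 1

for (name, status), n in sorted(counts.items()):
    print(f"{name}: {status} x{n}")
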
Feb 1 04:54:47 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:47.386 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-6c3db03b-523e-4bc1-b393-9ebce2d989a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c3db03b-523e-4bc1-b393-9ebce2d989a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e5e9f4ac99471688f0279d307f2650', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19c2267c-00a5-46e3-9993-22d0e5d1c93f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9f362718-c529-402d-be4c-23264e6d4d0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:54:47 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:47.389 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9f362718-c529-402d-be4c-23264e6d4d0a in datapath 6c3db03b-523e-4bc1-b393-9ebce2d989a9 bound to our chassis#033[00m Feb 1 04:54:47 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:47.391 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6c3db03b-523e-4bc1-b393-9ebce2d989a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:54:47 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:47.393 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[92bcad3c-44c6-4bac-8fe3-5e40c1992622]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost ovn_controller[152787]: 2026-02-01T09:54:47Z|00128|binding|INFO|Setting lport 9f362718-c529-402d-be4c-23264e6d4d0a ovn-installed in OVS Feb 1 04:54:47 localhost ovn_controller[152787]: 2026-02-01T09:54:47Z|00129|binding|INFO|Setting lport 9f362718-c529-402d-be4c-23264e6d4d0a up in Southbound Feb 1 04:54:47 localhost nova_compute[274317]: 2026-02-01 09:54:47.405 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost 
journal[224955]: ethtool ioctl error on tap9f362718-c5: No such device Feb 1 04:54:47 localhost nova_compute[274317]: 2026-02-01 09:54:47.439 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:47 localhost nova_compute[274317]: 2026-02-01 09:54:47.467 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 324 KiB/s rd, 2.6 MiB/s wr, 72 op/s Feb 1 04:54:48 localhost podman[307722]: Feb 1 04:54:48 localhost podman[307722]: 2026-02-01 09:54:48.243659947 +0000 UTC m=+0.089122692 container create 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:54:48 localhost systemd[1]: Started libpod-conmon-413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf.scope. Feb 1 04:54:48 localhost systemd[1]: tmp-crun.1seY1C.mount: Deactivated successfully. Feb 1 04:54:48 localhost podman[307722]: 2026-02-01 09:54:48.200564542 +0000 UTC m=+0.046027287 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:54:48 localhost systemd[1]: Started libcrun container. 
Feb 1 04:54:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be8a4cf6f9b91e437cac744565b8665b9984dca2b679012aff6c3eac617c63b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:54:48 localhost podman[307722]: 2026-02-01 09:54:48.323055087 +0000 UTC m=+0.168517822 container init 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:54:48 localhost podman[307722]: 2026-02-01 09:54:48.332246062 +0000 UTC m=+0.177708797 container start 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:54:48 localhost dnsmasq[307740]: started, version 2.85 cachesize 150 Feb 1 04:54:48 localhost dnsmasq[307740]: DNS service limited to local subnets Feb 1 04:54:48 localhost dnsmasq[307740]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:54:48 localhost dnsmasq[307740]: warning: no upstream servers configured Feb 1 04:54:48 localhost dnsmasq-dhcp[307740]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:54:48 localhost dnsmasq[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/addn_hosts - 0 addresses Feb 1 04:54:48 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/host Feb 1 04:54:48 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/opts Feb 1 04:54:48 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:48.514 259225 INFO neutron.agent.dhcp.agent [None req-1421531c-62a4-4503-9478-5b8d3a03642e - - - - - -] DHCP configuration for ports {'e1ae0704-eeaa-4346-991d-fe06dc0ead13'} is completed#033[00m Feb 1 04:54:50 localhost nova_compute[274317]: 2026-02-01 09:54:50.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:50 localhost nova_compute[274317]: 2026-02-01 09:54:50.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Feb 1 04:54:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 324 KiB/s rd, 2.6 MiB/s wr, 72 op/s Feb 1 04:54:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:50.669 2 INFO neutron.agent.securitygroups_rpc [None req-57c956cb-89d5-4885-9663-ca5823a12d21 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['6ebf4d70-9c5f-40a7-b43f-38d30ca97739']#033[00m Feb 1 04:54:50 localhost nova_compute[274317]: 2026-02-01 09:54:50.840 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:50.967 2 INFO neutron.agent.securitygroups_rpc [None req-ca712223-c062-400f-8ed8-8ff5e5903afc 306e307654cf41949f0bb118796a4bc7 8f87cde7f6eb4ef0beb13dc0679c10cb - - default default] Security group member updated ['a498609f-8637-4692-9d11-be96cabae719']#033[00m Feb 1 04:54:51 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:51.015 2 INFO neutron.agent.securitygroups_rpc [None req-c3b6808d-2668-4b98-8bd8-53b9c4ac7a7c 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['6ebf4d70-9c5f-40a7-b43f-38d30ca97739']#033[00m Feb 1 04:54:51 localhost nova_compute[274317]: 2026-02-01 09:54:51.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:51 localhost nova_compute[274317]: 2026-02-01 09:54:51.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:51 localhost nova_compute[274317]: 2026-02-01 09:54:51.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:54:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:54:51 localhost nova_compute[274317]: 2026-02-01 09:54:51.494 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:54:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
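
The recurring ceph-mgr pgmap entries (v196 through v200 so far) summarize cluster state: placement-group count and states, logical data size, raw used and available space, and client throughput. A small sketch, again assuming a placeholder capture file, that extracts the size fields for trending:

import re

LOG_PATH = "messages.log"  # placeholder
pgmap_re = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
)

with open(LOG_PATH, errors="replace") as fh:
    for line in fh:
        m = pgmap_re.search(line)
        if m:
            d = m.groupdict()
            print(d["ver"], d["pgs"], d["states"].strip(),
                  d["data"], "data,", d["used"], "used,",
                  d["avail"], "/", d["total"], "avail")
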
Feb 1 04:54:51 localhost podman[307741]: 2026-02-01 09:54:51.870108143 +0000 UTC m=+0.084413927 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-type=git, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, release=1769056855) Feb 1 04:54:51 localhost podman[307741]: 2026-02-01 09:54:51.881484635 +0000 UTC m=+0.095790419 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 1 04:54:51 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:54:51 localhost systemd[1]: tmp-crun.2aH7Xt.mount: Deactivated successfully. 
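
Each podman health_status/exec_died event above also dumps the container's config_data, the edpm_ansible-rendered options (image, net, ports, volumes, healthcheck test), as a Python-style dict embedded in the label list. A hedged sketch that recovers that dict by brace-matching after config_data= and handing it to ast.literal_eval; it assumes the dump is valid Python literal syntax with no braces inside quoted values, which holds for the entries shown here.

import ast

def extract_config_data(line):
    """Return the config_data dict embedded in a podman event line, or None."""
    marker = "config_data="
    start = line.find(marker)
    if start == -1:
        return None
    i = start + len(marker)
    depth = 0
    for j in range(i, len(line)):
        if line[j] == "{":
            depth += 1
        elif line[j] == "}":
            depth -= 1
            if depth == 0:
                # literal_eval accepts the single-quoted dict repr used in these events
                return ast.literal_eval(line[i:j + 1])
    return None

# Example: print the volume mounts logged for the ovn_metadata_agent container.
with open("messages.log", errors="replace") as fh:  # placeholder path
    for line in fh:
        if "name=ovn_metadata_agent" in line and "config_data={" in line:
            cfg = extract_config_data(line)
            if cfg:
                print(cfg.get("volumes"))
            break
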
Feb 1 04:54:51 localhost podman[307742]: 2026-02-01 09:54:51.977722067 +0000 UTC m=+0.189140071 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:54:52 localhost podman[307742]: 2026-02-01 09:54:52.011837685 +0000 UTC m=+0.223255679 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:54:52 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.099 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.116 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.117 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.117 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.117 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.117 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:54:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:52.137 2 INFO neutron.agent.securitygroups_rpc [None req-898aca50-e443-4f04-8633-193e8d5d70fe 306e307654cf41949f0bb118796a4bc7 
8f87cde7f6eb4ef0beb13dc0679c10cb - - default default] Security group member updated ['a498609f-8637-4692-9d11-be96cabae719']#033[00m Feb 1 04:54:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 304 KiB/s rd, 2.4 MiB/s wr, 67 op/s Feb 1 04:54:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:52.519 2 INFO neutron.agent.securitygroups_rpc [None req-194b0ee6-ff55-463b-b49c-e9e305c5f2ea 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:54:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2584361436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.704 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.930 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.931 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11647MB free_disk=41.70072555541992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": 
"0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.931 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.931 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.981 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:54:52 localhost nova_compute[274317]: 2026-02-01 09:54:52.982 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.008 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:54:53 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:53.272 2 INFO neutron.agent.securitygroups_rpc [None req-40a7cf1e-2b3d-4cda-aef2-d6b58ad042f7 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:54:53 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/678286761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.451 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.458 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.471 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.495 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.495 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:54:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:53.707 259225 INFO neutron.agent.linux.ip_lib [None req-87eae992-28d6-46b3-b307-ebf4256c1112 - - - - - -] Device tap663aeef3-4f cannot be used as it has no MAC address#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.729 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:53 localhost kernel: device tap663aeef3-4f entered promiscuous mode Feb 1 04:54:53 localhost NetworkManager[5972]: [1769939693.7359] manager: (tap663aeef3-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.735 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:53 localhost ovn_controller[152787]: 2026-02-01T09:54:53Z|00130|binding|INFO|Claiming lport 663aeef3-4f9a-4e46-92e6-29e331b8f905 for this chassis. 
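
The resource-tracker pass above ends with the inventory nova reports to placement: MEMORY_MB total=15738 reserved=512 allocation_ratio=1.0, VCPU total=8 reserved=0 allocation_ratio=16.0, DISK_GB total=41 reserved=1 allocation_ratio=0.9. Schedulable capacity is derived roughly as (total - reserved) * allocation_ratio, so a quick check of the numbers logged here:

# Inventory values copied from the nova.scheduler.client.report entry above.
inventory = {
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    print(f"{rc}: capacity {capacity}")
# -> MEMORY_MB: capacity 15226, VCPU: capacity 128, DISK_GB: capacity 36
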
Feb 1 04:54:53 localhost ovn_controller[152787]: 2026-02-01T09:54:53Z|00131|binding|INFO|663aeef3-4f9a-4e46-92e6-29e331b8f905: Claiming unknown Feb 1 04:54:53 localhost systemd-udevd[307831]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:54:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:53.745 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c19dda83-2ee3-4143-9992-3940695b7883, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=663aeef3-4f9a-4e46-92e6-29e331b8f905) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:54:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:53.747 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 663aeef3-4f9a-4e46-92e6-29e331b8f905 in datapath 3a0bb9e2-95cc-4b20-87c6-1e5c55901a39 bound to our chassis#033[00m Feb 1 04:54:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:53.748 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3a0bb9e2-95cc-4b20-87c6-1e5c55901a39 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:54:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:54:53.749 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[704435fe-dfb6-4d1c-a4ac-592ec6b066f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:54:53 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:53.751 2 INFO neutron.agent.securitygroups_rpc [None req-e7666148-591c-4a9b-983e-c90f12ec30cc 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost ovn_controller[152787]: 2026-02-01T09:54:53Z|00132|binding|INFO|Setting lport 663aeef3-4f9a-4e46-92e6-29e331b8f905 ovn-installed in OVS Feb 1 04:54:53 localhost ovn_controller[152787]: 2026-02-01T09:54:53Z|00133|binding|INFO|Setting lport 663aeef3-4f9a-4e46-92e6-29e331b8f905 up in Southbound Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.771 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:53 
localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost journal[224955]: ethtool ioctl error on tap663aeef3-4f: No such device Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.806 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:53 localhost nova_compute[274317]: 2026-02-01 09:54:53.832 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 68 op/s Feb 1 04:54:54 localhost podman[307902]: Feb 1 04:54:54 localhost podman[307902]: 2026-02-01 09:54:54.599167303 +0000 UTC m=+0.092148037 container create 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:54:54 localhost systemd[1]: Started libpod-conmon-5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf.scope. Feb 1 04:54:54 localhost systemd[1]: Started libcrun container. 
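
As with tap9f362718-c5 earlier, the new tap663aeef3-4f device produces a burst of identical "ethtool ioctl error ... No such device" entries while the interface is still being plumbed; they stop once the port is set up in Southbound. When reviewing a capture with one entry per line, collapsing consecutive duplicates keeps the timeline readable; a throwaway sketch:

import itertools
import re
import sys

# Strip the syslog timestamp/host prefix so identical messages compare equal.
prefix_re = re.compile(r"^\w{3} +\d+ [\d:]+ \S+ ")

def squash(lines):
    for msg, group in itertools.groupby(lines, key=lambda l: prefix_re.sub("", l)):
        n = len(list(group))
        yield msg if n == 1 else f"{msg.rstrip()}  [x{n}]\n"

if __name__ == "__main__":
    sys.stdout.writelines(squash(sys.stdin))
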
Feb 1 04:54:54 localhost podman[307902]: 2026-02-01 09:54:54.553271561 +0000 UTC m=+0.046252315 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:54:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c010c25732730f6cf24926dbc34c994a3de25b1f3464184ac39f6a9aeb5aeb15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:54:54 localhost podman[307902]: 2026-02-01 09:54:54.666265942 +0000 UTC m=+0.159246666 container init 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:54:54 localhost podman[307902]: 2026-02-01 09:54:54.678412688 +0000 UTC m=+0.171393412 container start 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:54:54 localhost dnsmasq[307921]: started, version 2.85 cachesize 150 Feb 1 04:54:54 localhost dnsmasq[307921]: DNS service limited to local subnets Feb 1 04:54:54 localhost dnsmasq[307921]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:54:54 localhost dnsmasq[307921]: warning: no upstream servers configured Feb 1 04:54:54 localhost dnsmasq-dhcp[307921]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:54:54 localhost dnsmasq[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/addn_hosts - 0 addresses Feb 1 04:54:54 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/host Feb 1 04:54:54 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/opts Feb 1 04:54:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:54.710 2 INFO neutron.agent.securitygroups_rpc [req-a8ee260e-a71a-4341-959a-47320de8959d req-f1ea5bc8-d304-408a-99f9-104affd65e7e ff35eaef616c4f428644a9a881f035d4 9bbefd3c06294b7fa7720ba6ca48fa4b - - default default] Security group member updated ['d6a2366a-be19-483b-bd9c-86227fb6f0c8']#033[00m Feb 1 04:54:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:54.736 2 INFO neutron.agent.securitygroups_rpc [None req-7f906aa9-468b-48e8-aab4-90305509c943 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:54.821 259225 INFO neutron.agent.dhcp.agent [None req-25ece3ce-b5eb-46bf-980f-f32b3ca69e1b - - - - - -] DHCP configuration for ports 
{'c56dc049-37f4-476f-8f50-1932d09f33f5'} is completed#033[00m Feb 1 04:54:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:54:55 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:55.541 2 INFO neutron.agent.securitygroups_rpc [None req-bc75b288-28fa-41e6-8b23-683fd10099a8 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.617917) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695617953, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 843, "num_deletes": 255, "total_data_size": 981828, "memory_usage": 996824, "flush_reason": "Manual Compaction"} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695623768, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 634977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18677, "largest_seqno": 19515, "table_properties": {"data_size": 631278, "index_size": 1490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9337, "raw_average_key_size": 20, "raw_value_size": 623480, "raw_average_value_size": 1379, "num_data_blocks": 65, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939656, "oldest_key_time": 1769939656, "file_creation_time": 1769939695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5897 microseconds, and 2698 cpu microseconds. Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
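[editor's note] The ceph-mon rocksdb lines above embed structured EVENT_LOG_v1 payloads (flush_started, table_file_creation, ...) as JSON after the "EVENT_LOG_v1" marker, so they can be pulled out and inspected programmatically rather than read inline. A minimal sketch, again assuming a plain-text journal export at a hypothetical path:

    # extract_rocksdb_events.py -- pull EVENT_LOG_v1 JSON payloads out of ceph-mon rocksdb lines
    import json

    events = []
    with open("/tmp/node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "rocksdb:" not in line or "EVENT_LOG_v1" not in line:
                continue
            payload = line.split("EVENT_LOG_v1", 1)[1].strip()
            try:
                events.append(json.loads(payload))
            except json.JSONDecodeError:
                pass  # skip anything after the marker that is not clean JSON

    for ev in events:
        print(ev.get("job"), ev.get("event"), ev.get("file_size") or ev.get("total_data_size"))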
Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.623811) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 634977 bytes OK Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.623831) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.626185) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.626204) EVENT_LOG_v1 {"time_micros": 1769939695626198, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.626221) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 977388, prev total WAL file size 977388, number of live WAL files 2. Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.626968) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(620KB)], [24(22MB)] Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695627012, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 23962605, "oldest_snapshot_seqno": -1} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12445 keys, 21349444 bytes, temperature: kUnknown Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695757765, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 21349444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21276706, "index_size": 40509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 335633, "raw_average_key_size": 26, "raw_value_size": 21062837, "raw_average_value_size": 1692, "num_data_blocks": 1533, "num_entries": 12445, "num_filter_entries": 12445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939695, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.757987) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 21349444 bytes Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.763541) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.2 rd, 163.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 22.2 +0.0 blob) out(20.4 +0.0 blob), read-write-amplify(71.4) write-amplify(33.6) OK, records in: 12973, records dropped: 528 output_compression: NoCompression Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.763581) EVENT_LOG_v1 {"time_micros": 1769939695763565, "job": 12, "event": "compaction_finished", "compaction_time_micros": 130814, "compaction_time_cpu_micros": 55897, "output_level": 6, "num_output_files": 1, "total_output_size": 21349444, "num_input_records": 12973, "num_output_records": 12445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695763844, "job": 12, "event": "table_file_deletion", "file_number": 26} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939695766017, "job": 12, "event": "table_file_deletion", "file_number": 24} Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.626862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.766042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.766047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.766049) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.766051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:55 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:55.766053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:55 localhost nova_compute[274317]: 2026-02-01 09:54:55.843 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 104 KiB/s wr, 22 op/s Feb 1 04:54:56 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:56.699 2 INFO neutron.agent.securitygroups_rpc [None req-56483b67-01a0-4213-8f99-ef04c4ba0846 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:56 localhost nova_compute[274317]: 2026-02-01 09:54:56.695 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:54:56 localhost nova_compute[274317]: 2026-02-01 09:54:56.697 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:54:56 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:56.705 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:54:55Z, description=, device_id=42b688b0-4c84-4fa7-8d5b-06392b34bb1a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=991bdc44-02fe-46ae-a7ac-3c925253bc9a, ip_allocation=immediate, mac_address=fa:16:3e:29:40:3b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:49Z, description=, dns_domain=, id=3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1709197007, port_security_enabled=True, project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54827, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1357, status=ACTIVE, subnets=['21530e73-d947-4c03-bf9d-7cb1658ac535'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:54:52Z, vlan_transparent=None, network_id=3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1406, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:54:55Z on network 3a0bb9e2-95cc-4b20-87c6-1e5c55901a39#033[00m Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New 
memtable created with log file: #28. Immutable memtables: 0. Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.711090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696711126, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 258, "num_deletes": 251, "total_data_size": 23386, "memory_usage": 29536, "flush_reason": "Manual Compaction"} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696713560, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 14223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19517, "largest_seqno": 19773, "table_properties": {"data_size": 12418, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5180, "raw_average_key_size": 20, "raw_value_size": 8969, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 257, "num_filter_entries": 257, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939696, "oldest_key_time": 1769939696, "file_creation_time": 1769939696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 2519 microseconds, and 890 cpu microseconds. Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
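[editor's note] The JOB 12 compaction summary above can be cross-checked against its EVENT_LOG_v1 fields: the amplification factors and throughput it reports are simple ratios of the logged byte counts (with MB taken as 10^6 bytes, which reproduces the logged figures). A quick check using the values copied from the JOB 11/12 entries:

    # Figures copied from the JOB 11 flush and JOB 12 compaction entries above.
    l0_input   = 634_977       # flush table #26 size in bytes
    comp_input = 23_962_605    # "input_data_size" in compaction_started
    comp_out   = 21_349_444    # "total_output_size" in compaction_finished
    micros     = 130_814       # "compaction_time_micros"

    write_amp      = comp_out / l0_input                  # ~33.6, matches "write-amplify(33.6)"
    read_write_amp = (comp_input + comp_out) / l0_input   # ~71.4, matches "read-write-amplify(71.4)"
    rd_mb_s        = comp_input / micros                  # ~183.2 MB/s read  (bytes per microsecond)
    wr_mb_s        = comp_out / micros                    # ~163.2 MB/s written

    print(f"{write_amp:.1f} {read_write_amp:.1f} {rd_mb_s:.1f} {wr_mb_s:.1f}")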
Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.713607) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 14223 bytes OK Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.713628) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.715317) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.715343) EVENT_LOG_v1 {"time_micros": 1769939696715337, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.715363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 21365, prev total WAL file size 21365, number of live WAL files 2. Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.715996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373539' seq:72057594037927935, type:22 .. '6D6772737461740034303131' seq:0, type:0; will stop at (end) Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(13KB)], [27(20MB)] Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696716029, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21363667, "oldest_snapshot_seqno": -1} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12195 keys, 19116671 bytes, temperature: kUnknown Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696825515, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 19116671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19050547, "index_size": 34535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 330632, "raw_average_key_size": 27, "raw_value_size": 18845929, "raw_average_value_size": 1545, "num_data_blocks": 1283, "num_entries": 12195, "num_filter_entries": 12195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.825825) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 19116671 bytes Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.827626) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 20.4 +0.0 blob) out(18.2 +0.0 blob), read-write-amplify(2846.1) write-amplify(1344.1) OK, records in: 12702, records dropped: 507 output_compression: NoCompression Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.827658) EVENT_LOG_v1 {"time_micros": 1769939696827645, "job": 14, "event": "compaction_finished", "compaction_time_micros": 109596, "compaction_time_cpu_micros": 43653, "output_level": 6, "num_output_files": 1, "total_output_size": 19116671, "num_input_records": 12702, "num_output_records": 12195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696827803, "job": 14, "event": "table_file_deletion", "file_number": 29} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939696830797, "job": 14, "event": "table_file_deletion", "file_number": 27} Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.715955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.830851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.830858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.830861) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.830864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:54:56.830867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:54:56 localhost dnsmasq[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/addn_hosts - 1 addresses Feb 1 04:54:56 localhost podman[307940]: 2026-02-01 09:54:56.895870276 +0000 UTC m=+0.054448877 container kill 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:54:56 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/host Feb 1 04:54:56 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/opts Feb 1 04:54:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:57.073 259225 INFO neutron.agent.dhcp.agent [None req-a81291ef-6918-46b1-a3bc-1d8acebfb4aa - - - - - -] DHCP configuration for ports {'991bdc44-02fe-46ae-a7ac-3c925253bc9a'} is completed#033[00m Feb 1 04:54:57 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:57.723 2 INFO neutron.agent.securitygroups_rpc [None req-fccfae55-19b6-4730-a30c-150ca6a4b95d 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 105 KiB/s wr, 42 op/s Feb 1 04:54:58 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:58.464 2 INFO neutron.agent.securitygroups_rpc [None req-91b7ca55-4f70-425c-a0e2-0dabc032162c 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:59 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:59.072 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:54:55Z, description=, device_id=42b688b0-4c84-4fa7-8d5b-06392b34bb1a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=991bdc44-02fe-46ae-a7ac-3c925253bc9a, ip_allocation=immediate, mac_address=fa:16:3e:29:40:3b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:49Z, description=, dns_domain=, id=3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1709197007, port_security_enabled=True, 
project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54827, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1357, status=ACTIVE, subnets=['21530e73-d947-4c03-bf9d-7cb1658ac535'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:54:52Z, vlan_transparent=None, network_id=3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1406, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:54:55Z on network 3a0bb9e2-95cc-4b20-87c6-1e5c55901a39#033[00m Feb 1 04:54:59 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:59.116 2 INFO neutron.agent.securitygroups_rpc [None req-e31219af-a122-457f-883b-2593c5b9c745 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:54:59 localhost dnsmasq[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/addn_hosts - 1 addresses Feb 1 04:54:59 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/host Feb 1 04:54:59 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/opts Feb 1 04:54:59 localhost systemd[1]: tmp-crun.Rchxgr.mount: Deactivated successfully. Feb 1 04:54:59 localhost podman[307980]: 2026-02-01 09:54:59.264990894 +0000 UTC m=+0.063570501 container kill 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:54:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:54:59 localhost systemd[1]: tmp-crun.hKRZUQ.mount: Deactivated successfully. 
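[editor's note] The "Trigger reload_allocations" entries above log the whole port as one flattened key=value repr, with the owning network's attributes nested inside "network=...", which makes them hard to read. A rough sketch for pulling a few top-level fields back out with regexes; it deliberately takes the first match and does not try to untangle the nested network repr:

    import re

    # Truncated excerpt of the port repr logged above; in practice `blob` would be the whole entry.
    blob = ("admin_state_up=True, allowed_address_pairs=[], device_id=42b688b0-4c84-4fa7-8d5b-06392b34bb1a, "
            "device_owner=network:router_interface, id=991bdc44-02fe-46ae-a7ac-3c925253bc9a, "
            "mac_address=fa:16:3e:29:40:3b, network_id=3a0bb9e2-95cc-4b20-87c6-1e5c55901a39")

    def first(field, text):
        # (?<![\w:]) keeps e.g. device_id=/standard_attr_id= from matching when we ask for plain id=
        m = re.search(rf"(?<![\w:]){field}=([^,]*)", text)
        return m.group(1) if m else None

    for field in ("id", "mac_address", "device_owner", "network_id"):
        print(field, "=", first(field, blob))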
Feb 1 04:54:59 localhost podman[307995]: 2026-02-01 09:54:59.385788146 +0000 UTC m=+0.095294053 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 04:54:59 localhost podman[307995]: 2026-02-01 09:54:59.424915009 +0000 UTC m=+0.134420906 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:54:59 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:54:59 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:54:59.537 259225 INFO neutron.agent.dhcp.agent [None req-abfe6113-2f9d-490d-b2c3-197feb8f6b12 - - - - - -] DHCP configuration for ports {'991bdc44-02fe-46ae-a7ac-3c925253bc9a'} is completed#033[00m Feb 1 04:54:59 localhost neutron_sriov_agent[252054]: 2026-02-01 09:54:59.635 2 INFO neutron.agent.securitygroups_rpc [None req-72904d6f-3e0e-4e9a-a3b1-a7457a722d24 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['0efcca4c-fcac-48b8-ae72-0742b6eb0b6c']#033[00m Feb 1 04:55:00 localhost podman[236852]: time="2026-02-01T09:55:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:55:00 localhost podman[236852]: @ - - [01/Feb/2026:09:55:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158990 "" "Go-http-client/1.1" Feb 1 04:55:00 localhost podman[236852]: @ - - [01/Feb/2026:09:55:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19243 "" "Go-http-client/1.1" Feb 1 04:55:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Feb 1 04:55:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:00.498 259225 INFO neutron.agent.linux.ip_lib [None req-9f3711b0-0522-4e05-b7ad-f0342ca0796b - - - - - -] Device tapaebf5a93-f1 cannot be used as it has no MAC address#033[00m Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.521 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:00 localhost kernel: device tapaebf5a93-f1 entered promiscuous mode Feb 1 04:55:00 localhost NetworkManager[5972]: [1769939700.5292] manager: (tapaebf5a93-f1): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Feb 1 04:55:00 localhost ovn_controller[152787]: 2026-02-01T09:55:00Z|00134|binding|INFO|Claiming lport aebf5a93-f1df-421a-8bc6-9d245205815f for this chassis. Feb 1 04:55:00 localhost ovn_controller[152787]: 2026-02-01T09:55:00Z|00135|binding|INFO|aebf5a93-f1df-421a-8bc6-9d245205815f: Claiming unknown Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.529 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:00 localhost systemd-udevd[308028]: Network interface NamePolicy= disabled on kernel command line. 
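[editor's note] The podman access-log entries above ("GET /v4.9.3/libpod/containers/json?all=true&..." answered with 200) show the libpod REST service being polled over its unix socket. The same endpoint can be queried directly, given access to the socket; a minimal sketch, assuming the socket path that appears later in the podman_exporter config (/run/podman/podman.sock) and root privileges:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(f"{len(containers)} containers returned")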
Feb 1 04:55:00 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:00.544 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-9ecb4282-8104-4878-8e0d-966d3ce505f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ecb4282-8104-4878-8e0d-966d3ce505f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e00f2ed54c74d70847b97f9f434e5e6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64afe251-cbee-41d8-8098-a70c383c96db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=aebf5a93-f1df-421a-8bc6-9d245205815f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:00 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:00.551 158655 INFO neutron.agent.ovn.metadata.agent [-] Port aebf5a93-f1df-421a-8bc6-9d245205815f in datapath 9ecb4282-8104-4878-8e0d-966d3ce505f1 bound to our chassis#033[00m Feb 1 04:55:00 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:00.553 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ecb4282-8104-4878-8e0d-966d3ce505f1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:00 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:00.555 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcc9dc5-0872-4013-9068-0a0284a6d490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost ovn_controller[152787]: 2026-02-01T09:55:00Z|00136|binding|INFO|Setting lport aebf5a93-f1df-421a-8bc6-9d245205815f ovn-installed in OVS Feb 1 04:55:00 localhost ovn_controller[152787]: 2026-02-01T09:55:00Z|00137|binding|INFO|Setting lport aebf5a93-f1df-421a-8bc6-9d245205815f up in Southbound Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.574 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost 
journal[224955]: ethtool ioctl error on tapaebf5a93-f1: No such device Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.613 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.636 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:00 localhost nova_compute[274317]: 2026-02-01 09:55:00.844 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:01 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:01.120 2 INFO neutron.agent.securitygroups_rpc [None req-866dbbe7-cd9a-459b-9d5c-70660b94e103 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['e4f60f26-54df-4f21-8c82-cc76833023ab']#033[00m Feb 1 04:55:01 localhost podman[308100]: Feb 1 04:55:01 localhost podman[308100]: 2026-02-01 09:55:01.512725639 +0000 UTC m=+0.088246634 container create 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:55:01 localhost systemd[1]: Started libpod-conmon-9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386.scope. Feb 1 04:55:01 localhost systemd[1]: Started libcrun container. 
Feb 1 04:55:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca2471c03211d064700b7109f6deb39ac7204f972477d84418c84baebc05a05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:55:01 localhost podman[308100]: 2026-02-01 09:55:01.470616135 +0000 UTC m=+0.046137500 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:55:01 localhost openstack_network_exporter[239388]: ERROR 09:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:55:01 localhost openstack_network_exporter[239388]: Feb 1 04:55:01 localhost openstack_network_exporter[239388]: ERROR 09:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:55:01 localhost openstack_network_exporter[239388]: Feb 1 04:55:01 localhost podman[308100]: 2026-02-01 09:55:01.577187807 +0000 UTC m=+0.152708782 container init 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:55:01 localhost podman[308100]: 2026-02-01 09:55:01.589148157 +0000 UTC m=+0.164669152 container start 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:55:01 localhost dnsmasq[308119]: started, version 2.85 cachesize 150 Feb 1 04:55:01 localhost dnsmasq[308119]: DNS service limited to local subnets Feb 1 04:55:01 localhost dnsmasq[308119]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:55:01 localhost dnsmasq[308119]: warning: no upstream servers configured Feb 1 04:55:01 localhost dnsmasq-dhcp[308119]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:55:01 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 0 addresses Feb 1 04:55:01 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:01 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:01.632 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:01Z, description=, device_id=68d684c2-2d8e-49d4-b723-69387fb1e9d1, device_owner=network:router_interface, dns_assignment=[], 
dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3bfd0ccd-300d-4a86-a663-a937cbd871e8, ip_allocation=immediate, mac_address=fa:16:3e:a1:4a:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:44Z, description=, dns_domain=, id=6c3db03b-523e-4bc1-b393-9ebce2d989a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1734500773, port_security_enabled=True, project_id=b3e5e9f4ac99471688f0279d307f2650, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=52069, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1306, status=ACTIVE, subnets=['68b5b999-50d8-4107-94e6-5f7c15e05d58'], tags=[], tenant_id=b3e5e9f4ac99471688f0279d307f2650, updated_at=2026-02-01T09:54:46Z, vlan_transparent=None, network_id=6c3db03b-523e-4bc1-b393-9ebce2d989a9, port_security_enabled=False, project_id=b3e5e9f4ac99471688f0279d307f2650, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1439, status=DOWN, tags=[], tenant_id=b3e5e9f4ac99471688f0279d307f2650, updated_at=2026-02-01T09:55:01Z on network 6c3db03b-523e-4bc1-b393-9ebce2d989a9#033[00m Feb 1 04:55:01 localhost nova_compute[274317]: 2026-02-01 09:55:01.699 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:01 localhost dnsmasq[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/addn_hosts - 1 addresses Feb 1 04:55:01 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/host Feb 1 04:55:01 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/opts Feb 1 04:55:01 localhost podman[308137]: 2026-02-01 09:55:01.804456479 +0000 UTC m=+0.057714139 container kill 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:01.807 259225 INFO neutron.agent.dhcp.agent [None req-cc352a97-5595-4bda-951c-24f03655f028 - - - - - -] DHCP configuration for ports {'46d348b0-12c7-4993-ab4e-2bb80e58ed68'} is completed#033[00m Feb 1 04:55:02 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:02.055 259225 INFO neutron.agent.dhcp.agent [None req-a9c866ca-e125-4d1a-81f5-2b8bf247b5e7 - - - - - -] DHCP configuration for ports {'3bfd0ccd-300d-4a86-a663-a937cbd871e8'} is completed#033[00m Feb 1 04:55:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Feb 1 04:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
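[editor's note] Each "Trigger reload_allocations for port ... id=<port-id> ..." entry is eventually followed by a "DHCP configuration for ports {'<port-id>'} is completed" entry from the same agent (for example 3bfd0ccd-300d-4a86-a663-a937cbd871e8 above). A rough sketch that pairs the two per port id and reports the gap, using the agent's own timestamps and a hypothetical journal export path; it only handles single-port completion sets and keeps the earliest trigger per port:

    import re
    from datetime import datetime

    TRIGGER   = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+).*Trigger reload_allocations for port"
                           r".*?(?<![\w:])id=([0-9a-f-]{36})")
    COMPLETED = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+).*DHCP configuration for ports "
                           r"\{'([0-9a-f-]{36})'\} is completed")

    def ts(s):
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    started = {}
    with open("/tmp/node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = TRIGGER.search(line)
            if m:
                started.setdefault(m.group(2), ts(m.group(1)))
                continue
            m = COMPLETED.search(line)
            if m and m.group(2) in started:
                delta = ts(m.group(1)) - started.pop(m.group(2))
                print(f"port {m.group(2)}: {delta.total_seconds():.3f}s from trigger to completed")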
Feb 1 04:55:02 localhost podman[308157]: 2026-02-01 09:55:02.592992872 +0000 UTC m=+0.058741141 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:55:02 localhost podman[308157]: 2026-02-01 09:55:02.599491663 +0000 UTC m=+0.065239942 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:55:02 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
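[editor's note] The podman healthcheck runs above end with a "container health_status ..." event that carries the container name and result inline (health_status=healthy for ceilometer_agent_compute and podman_exporter here). A small sketch for summarising those results from a journal export (hypothetical path as before):

    import re
    from collections import Counter

    HEALTH = re.compile(r"container health_status .*?\bname=([\w.-]+),.*?health_status=(\w+)")

    results = Counter()
    with open("/tmp/node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = HEALTH.search(line)
            if m:
                results[(m.group(1), m.group(2))] += 1

    for (name, status), n in sorted(results.items()):
        print(f"{name}: {status} x{n}")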
Feb 1 04:55:03 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:03.363 2 INFO neutron.agent.securitygroups_rpc [None req-d4cc719f-d42b-446b-935d-536d497f9b87 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:55:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:55:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:03.457 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:02Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=33d00b58-8fe0-49bf-9ca6-6ab8e48b27a8, ip_allocation=immediate, mac_address=fa:16:3e:f6:75:9f, name=tempest-AllowedAddressPairTestJSON-226722493, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1453, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:02Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:03 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 1 addresses Feb 1 04:55:03 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:03 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:03 localhost podman[308199]: 2026-02-01 09:55:03.659513877 +0000 UTC m=+0.059054501 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:03.873 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:01Z, description=, device_id=68d684c2-2d8e-49d4-b723-69387fb1e9d1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3bfd0ccd-300d-4a86-a663-a937cbd871e8, ip_allocation=immediate, 
mac_address=fa:16:3e:a1:4a:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:44Z, description=, dns_domain=, id=6c3db03b-523e-4bc1-b393-9ebce2d989a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1734500773, port_security_enabled=True, project_id=b3e5e9f4ac99471688f0279d307f2650, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=52069, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1306, status=ACTIVE, subnets=['68b5b999-50d8-4107-94e6-5f7c15e05d58'], tags=[], tenant_id=b3e5e9f4ac99471688f0279d307f2650, updated_at=2026-02-01T09:54:46Z, vlan_transparent=None, network_id=6c3db03b-523e-4bc1-b393-9ebce2d989a9, port_security_enabled=False, project_id=b3e5e9f4ac99471688f0279d307f2650, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1439, status=DOWN, tags=[], tenant_id=b3e5e9f4ac99471688f0279d307f2650, updated_at=2026-02-01T09:55:01Z on network 6c3db03b-523e-4bc1-b393-9ebce2d989a9#033[00m Feb 1 04:55:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:03.922 259225 INFO neutron.agent.dhcp.agent [None req-87c006d8-f43c-4bcd-8a52-3c4da0b64e23 - - - - - -] DHCP configuration for ports {'33d00b58-8fe0-49bf-9ca6-6ab8e48b27a8'} is completed#033[00m Feb 1 04:55:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:04.035 2 INFO neutron.agent.securitygroups_rpc [None req-a6c0da30-1e7d-4fdc-b34f-c8211b005180 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['dcd86290-3678-4dc4-8595-e876b5745966']#033[00m Feb 1 04:55:04 localhost dnsmasq[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/addn_hosts - 1 addresses Feb 1 04:55:04 localhost podman[308237]: 2026-02-01 09:55:04.062222885 +0000 UTC m=+0.064997854 container kill 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:55:04 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/host Feb 1 04:55:04 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/opts Feb 1 04:55:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Feb 1 04:55:04 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:04.265 259225 INFO neutron.agent.dhcp.agent [None req-778cd3bb-243a-42bd-9307-72402127ed5d - - - - - -] DHCP configuration for ports {'3bfd0ccd-300d-4a86-a663-a937cbd871e8'} is completed#033[00m Feb 1 04:55:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:04.590 2 INFO neutron.agent.securitygroups_rpc [None req-14af839d-45a9-4f98-b03e-7a019e6f639f 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security 
group rule updated ['dcd86290-3678-4dc4-8595-e876b5745966']#033[00m Feb 1 04:55:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:04.970 2 INFO neutron.agent.securitygroups_rpc [None req-e93a2aaf-9f80-41c1-a11d-131d09e57386 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:05.088 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8f90dd66-37b5-4ed3-9dfb-df4f3d5ff644, ip_allocation=immediate, mac_address=fa:16:3e:96:a3:7e, name=tempest-AllowedAddressPairTestJSON-2125362275, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1459, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:04Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:05 localhost dnsmasq[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/addn_hosts - 0 addresses Feb 1 04:55:05 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/host Feb 1 04:55:05 localhost podman[308272]: 2026-02-01 09:55:05.172387603 +0000 UTC m=+0.058399119 container kill 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:05 localhost dnsmasq-dhcp[307921]: read /var/lib/neutron/dhcp/3a0bb9e2-95cc-4b20-87c6-1e5c55901a39/opts Feb 1 04:55:05 localhost systemd[1]: tmp-crun.Em7KP1.mount: Deactivated successfully. 
Feb 1 04:55:05 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 2 addresses Feb 1 04:55:05 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:05 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:05 localhost podman[308305]: 2026-02-01 09:55:05.295386355 +0000 UTC m=+0.063772267 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:05 localhost nova_compute[274317]: 2026-02-01 09:55:05.336 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:05 localhost ovn_controller[152787]: 2026-02-01T09:55:05Z|00138|binding|INFO|Releasing lport 663aeef3-4f9a-4e46-92e6-29e331b8f905 from this chassis (sb_readonly=0) Feb 1 04:55:05 localhost kernel: device tap663aeef3-4f left promiscuous mode Feb 1 04:55:05 localhost ovn_controller[152787]: 2026-02-01T09:55:05Z|00139|binding|INFO|Setting lport 663aeef3-4f9a-4e46-92e6-29e331b8f905 down in Southbound Feb 1 04:55:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:05.346 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c19dda83-2ee3-4143-9992-3940695b7883, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=663aeef3-4f9a-4e46-92e6-29e331b8f905) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:05.348 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 663aeef3-4f9a-4e46-92e6-29e331b8f905 in datapath 3a0bb9e2-95cc-4b20-87c6-1e5c55901a39 unbound from our chassis#033[00m Feb 1 04:55:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:05.350 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
3a0bb9e2-95cc-4b20-87c6-1e5c55901a39 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:05 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:05.351 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd33c81-a445-4f26-9204-765404456673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:05 localhost nova_compute[274317]: 2026-02-01 09:55:05.358 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:05.526 259225 INFO neutron.agent.dhcp.agent [None req-4fb1290a-041c-4efb-bccd-405fa6edef15 - - - - - -] DHCP configuration for ports {'8f90dd66-37b5-4ed3-9dfb-df4f3d5ff644'} is completed#033[00m Feb 1 04:55:05 localhost nova_compute[274317]: 2026-02-01 09:55:05.883 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 938 B/s wr, 19 op/s Feb 1 04:55:06 localhost podman[308347]: 2026-02-01 09:55:06.672173645 +0000 UTC m=+0.060482295 container kill 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:06 localhost dnsmasq[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/addn_hosts - 0 addresses Feb 1 04:55:06 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/host Feb 1 04:55:06 localhost dnsmasq-dhcp[307740]: read /var/lib/neutron/dhcp/6c3db03b-523e-4bc1-b393-9ebce2d989a9/opts Feb 1 04:55:06 localhost nova_compute[274317]: 2026-02-01 09:55:06.701 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:07 localhost nova_compute[274317]: 2026-02-01 09:55:07.044 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:07 localhost ovn_controller[152787]: 2026-02-01T09:55:07Z|00140|binding|INFO|Releasing lport 9f362718-c529-402d-be4c-23264e6d4d0a from this chassis (sb_readonly=0) Feb 1 04:55:07 localhost ovn_controller[152787]: 2026-02-01T09:55:07Z|00141|binding|INFO|Setting lport 9f362718-c529-402d-be4c-23264e6d4d0a down in Southbound Feb 1 04:55:07 localhost kernel: device tap9f362718-c5 left promiscuous mode Feb 1 04:55:07 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:07.060 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, 
nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-6c3db03b-523e-4bc1-b393-9ebce2d989a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c3db03b-523e-4bc1-b393-9ebce2d989a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e5e9f4ac99471688f0279d307f2650', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19c2267c-00a5-46e3-9993-22d0e5d1c93f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9f362718-c529-402d-be4c-23264e6d4d0a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:07 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:07.062 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9f362718-c529-402d-be4c-23264e6d4d0a in datapath 6c3db03b-523e-4bc1-b393-9ebce2d989a9 unbound from our chassis#033[00m Feb 1 04:55:07 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:07.064 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6c3db03b-523e-4bc1-b393-9ebce2d989a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:07 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:07.065 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[67171feb-01bb-47ff-8f38-ad261752e1a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:07 localhost nova_compute[274317]: 2026-02-01 09:55:07.070 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:07 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:07.158 2 INFO neutron.agent.securitygroups_rpc [None req-943127f9-174a-469a-a7d0-e33db638b827 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:07 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:07.306 2 INFO neutron.agent.securitygroups_rpc [None req-eb9a1f3c-34ee-4016-ae11-84944f9bb005 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['a5467c7c-cb9b-4aeb-bb09-b5bf7707aed9']#033[00m Feb 1 04:55:07 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 1 addresses Feb 1 04:55:07 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:07 localhost podman[308388]: 2026-02-01 09:55:07.569255811 +0000 UTC m=+0.060809095 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:55:07 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:07 localhost dnsmasq[307921]: exiting on receipt of SIGTERM Feb 1 04:55:07 localhost podman[308423]: 2026-02-01 09:55:07.737637868 +0000 UTC m=+0.062204168 container kill 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:07 localhost systemd[1]: libpod-5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf.scope: Deactivated successfully. Feb 1 04:55:07 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:07.780 2 INFO neutron.agent.securitygroups_rpc [None req-7cd63175-19a6-47c4-a54b-1047c35ebff0 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['a5467c7c-cb9b-4aeb-bb09-b5bf7707aed9']#033[00m Feb 1 04:55:07 localhost podman[308436]: 2026-02-01 09:55:07.807970397 +0000 UTC m=+0.059395931 container died 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:55:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf-userdata-shm.mount: Deactivated successfully. Feb 1 04:55:07 localhost podman[308436]: 2026-02-01 09:55:07.843751086 +0000 UTC m=+0.095176580 container cleanup 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:55:07 localhost systemd[1]: libpod-conmon-5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf.scope: Deactivated successfully. 
Feb 1 04:55:07 localhost podman[308443]: 2026-02-01 09:55:07.886680526 +0000 UTC m=+0.127478730 container remove 5687c71923a452eddfa3c30666f0b986c793de747f66a501e953d12eec06d6bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a0bb9e2-95cc-4b20-87c6-1e5c55901a39, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:55:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 938 B/s wr, 19 op/s Feb 1 04:55:08 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:08.209 259225 INFO neutron.agent.dhcp.agent [None req-ce40eaba-906b-4323-8472-c4affb5dcffe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:08 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:08.292 2 INFO neutron.agent.securitygroups_rpc [None req-d9ca590a-9c4b-410e-b353-3b648a203b3e cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:08 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:08.331 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:07Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=53908e93-cb8b-4f2a-84a4-322ab382b07b, ip_allocation=immediate, mac_address=fa:16:3e:ff:2b:2b, name=tempest-AllowedAddressPairTestJSON-332394374, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1466, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:08Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:08 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 2 addresses Feb 1 04:55:08 localhost podman[308482]: 2026-02-01 09:55:08.552659132 +0000 UTC m=+0.063726406 container kill 
9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:55:08 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:08 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:08 localhost systemd[1]: var-lib-containers-storage-overlay-c010c25732730f6cf24926dbc34c994a3de25b1f3464184ac39f6a9aeb5aeb15-merged.mount: Deactivated successfully. Feb 1 04:55:08 localhost systemd[1]: run-netns-qdhcp\x2d3a0bb9e2\x2d95cc\x2d4b20\x2d87c6\x2d1e5c55901a39.mount: Deactivated successfully. Feb 1 04:55:08 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:08.785 259225 INFO neutron.agent.dhcp.agent [None req-22b3d707-c8ee-415d-a48c-f813a6026f76 - - - - - -] DHCP configuration for ports {'53908e93-cb8b-4f2a-84a4-322ab382b07b'} is completed#033[00m Feb 1 04:55:09 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:09.357 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:09 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:09.586 2 INFO neutron.agent.securitygroups_rpc [None req-efaae1c0-3e5f-4a79-9705-2ee5fc658831 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:09 localhost nova_compute[274317]: 2026-02-01 09:55:09.816 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:10 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:10.360 2 INFO neutron.agent.securitygroups_rpc [None req-c53a02fb-f287-4ee9-90b1-fd2dea7e171b 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:10 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:10.373 2 INFO neutron.agent.securitygroups_rpc [None req-8c9b29e3-84f5-4ecc-a69f-55a0bba3249d cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:10 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 1 addresses Feb 1 04:55:10 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:10 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:10 localhost podman[308520]: 2026-02-01 09:55:10.675036644 +0000 UTC m=+0.057841823 
container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:55:10 localhost nova_compute[274317]: 2026-02-01 09:55:10.939 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:11 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:11.310 2 INFO neutron.agent.securitygroups_rpc [None req-f752f535-7480-4edb-a5ff-a77295c4683e 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:11 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:11.631 2 INFO neutron.agent.securitygroups_rpc [None req-61a06c10-15de-40c3-8b7d-3148d6b4f873 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:11 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:11.703 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:11Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8b27933e-cee1-4d4a-b715-b56f1202d238, ip_allocation=immediate, mac_address=fa:16:3e:70:3a:22, name=tempest-AllowedAddressPairTestJSON-1293059564, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1501, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:11Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:11 localhost nova_compute[274317]: 2026-02-01 09:55:11.703 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:11 localhost systemd[1]: tmp-crun.Mrz2iy.mount: Deactivated successfully. 
Feb 1 04:55:11 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 2 addresses Feb 1 04:55:11 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:11 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:11 localhost podman[308558]: 2026-02-01 09:55:11.92744936 +0000 UTC m=+0.071567468 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:12.010 2 INFO neutron.agent.securitygroups_rpc [None req-c4d0e02a-d41e-4d38-a096-515e00cc05ca 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:12 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:12.204 259225 INFO neutron.agent.dhcp.agent [None req-e7be9a8e-5765-4b78-9e79-53b88e8f7202 - - - - - -] DHCP configuration for ports {'8b27933e-cee1-4d4a-b715-b56f1202d238'} is completed#033[00m Feb 1 04:55:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:12.301 2 INFO neutron.agent.securitygroups_rpc [None req-7c1928f8-098c-4c19-bf11-b8c429ffbdda 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:12.948 2 INFO neutron.agent.securitygroups_rpc [None req-28204c12-7df5-4c98-a342-64d5ab507d83 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['438bf7d2-5c8f-4a0b-9c04-1bbb91f9d2e5']#033[00m Feb 1 04:55:13 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:13.817 2 INFO neutron.agent.securitygroups_rpc [None req-5b09233a-6601-4e83-a057-9181035eb7ab cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:13 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:13.895 2 INFO neutron.agent.securitygroups_rpc [None req-0b649a65-4805-4006-9abf-770c26af78b1 930a89cab3af43239942c71cee47dc19 904cc8942364443bb4c4a4017bb1e647 - - default default] Security group member updated ['4db01845-8230-4c8d-a3f4-5e942e576ef7']#033[00m Feb 1 04:55:14 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 1 addresses Feb 1 04:55:14 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:14 localhost podman[308596]: 2026-02-01 09:55:14.080369768 +0000 UTC m=+0.059147554 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 04:55:14 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:14 localhost podman[308634]: 2026-02-01 09:55:14.636115668 +0000 UTC m=+0.062772886 container kill 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:14 localhost dnsmasq[307740]: exiting on receipt of SIGTERM Feb 1 04:55:14 localhost systemd[1]: libpod-413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf.scope: Deactivated successfully. Feb 1 04:55:14 localhost podman[308648]: 2026-02-01 09:55:14.705987334 +0000 UTC m=+0.054246093 container died 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:14 localhost podman[308648]: 2026-02-01 09:55:14.734747434 +0000 UTC m=+0.083006143 container cleanup 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:55:14 localhost systemd[1]: libpod-conmon-413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf.scope: Deactivated successfully. 
Feb 1 04:55:14 localhost podman[308650]: 2026-02-01 09:55:14.785392483 +0000 UTC m=+0.127980806 container remove 413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c3db03b-523e-4bc1-b393-9ebce2d989a9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:55:14 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:14.804 2 INFO neutron.agent.securitygroups_rpc [None req-8cf1a02e-dfaa-4f95-a3f5-4d4a9a4c833f 3874381a42e5464e990880c51dfe02ee b7c21c7c3be54a6ca3c24d0fe0d75778 - - default default] Security group rule updated ['be540add-f8ad-43d9-9aea-3a58bb289e01']#033[00m Feb 1 04:55:14 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Feb 1 04:55:15 localhost systemd[1]: var-lib-containers-storage-overlay-2be8a4cf6f9b91e437cac744565b8665b9984dca2b679012aff6c3eac617c63b-merged.mount: Deactivated successfully. Feb 1 04:55:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-413a89d18eb25a0ca4c567ed78323c32827f90fc85cb2f08860837c4e3da79bf-userdata-shm.mount: Deactivated successfully. Feb 1 04:55:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:15.087 259225 INFO neutron.agent.dhcp.agent [None req-0c4610f4-844a-488b-b858-a36ad28e1eec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:15 localhost systemd[1]: run-netns-qdhcp\x2d6c3db03b\x2d523e\x2d4bc1\x2db393\x2d9ebce2d989a9.mount: Deactivated successfully. 
Feb 1 04:55:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:15.462 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:15.492 2 INFO neutron.agent.securitygroups_rpc [None req-018e5631-8c87-44c1-9046-0c9a678cac95 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:15.606 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:14Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a341db71-f700-4b35-9ac8-bb001c1c4e94, ip_allocation=immediate, mac_address=fa:16:3e:9e:68:6e, name=tempest-AllowedAddressPairTestJSON-1821801941, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1514, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:14Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:15 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 2 addresses Feb 1 04:55:15 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:15 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:15 localhost podman[308695]: 2026-02-01 09:55:15.819240018 +0000 UTC m=+0.058717411 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 1 
04:55:15 localhost nova_compute[274317]: 2026-02-01 09:55:15.942 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:16 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:16.092 259225 INFO neutron.agent.dhcp.agent [None req-f476aad5-2c09-4c3f-8a68-5289e94faf03 - - - - - -] DHCP configuration for ports {'a341db71-f700-4b35-9ac8-bb001c1c4e94'} is completed#033[00m Feb 1 04:55:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:16 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:16.300 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:16 localhost nova_compute[274317]: 2026-02-01 09:55:16.706 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:55:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:17.787 2 INFO neutron.agent.securitygroups_rpc [None req-ab764583-db62-4331-9e90-bc94b6b4e26a cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:17 localhost systemd[1]: tmp-crun.3zNBIk.mount: Deactivated successfully. Feb 1 04:55:17 localhost podman[308717]: 2026-02-01 09:55:17.856528944 +0000 UTC m=+0.072407765 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:17 localhost podman[308718]: 2026-02-01 09:55:17.874637615 +0000 UTC m=+0.085666996 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:55:17 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:17.883 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:16Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0ca28df6-cb43-4342-8724-5b036d328dce, ip_allocation=immediate, mac_address=fa:16:3e:6a:39:a6, name=tempest-AllowedAddressPairTestJSON-943410257, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:54:58Z, description=, dns_domain=, id=9ecb4282-8104-4878-8e0d-966d3ce505f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1256779838, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46894, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1423, status=ACTIVE, subnets=['07a6bf4e-6f88-44a1-bfbd-3faf418e14ec'], tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:54:59Z, vlan_transparent=None, network_id=9ecb4282-8104-4878-8e0d-966d3ce505f1, port_security_enabled=True, project_id=7e00f2ed54c74d70847b97f9f434e5e6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b'], standard_attr_id=1527, status=DOWN, tags=[], tenant_id=7e00f2ed54c74d70847b97f9f434e5e6, updated_at=2026-02-01T09:55:17Z on network 9ecb4282-8104-4878-8e0d-966d3ce505f1#033[00m Feb 1 04:55:17 localhost podman[308718]: 2026-02-01 09:55:17.955350085 +0000 UTC m=+0.166379476 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:55:17 localhost podman[308717]: 2026-02-01 09:55:17.963033574 +0000 UTC m=+0.178912355 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Feb 1 04:55:17 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:55:17 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:55:18 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 3 addresses Feb 1 04:55:18 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:18 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:18 localhost podman[308782]: 2026-02-01 09:55:18.085714575 +0000 UTC m=+0.068678009 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 1 04:55:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:18.267 259225 INFO neutron.agent.dhcp.agent [None req-e0261892-fad4-4779-acc9-60ee82c0ef91 - - - - - -] DHCP configuration for ports {'0ca28df6-cb43-4342-8724-5b036d328dce'} is completed#033[00m Feb 1 04:55:18 localhost systemd[1]: tmp-crun.DjYnRJ.mount: Deactivated successfully. Feb 1 04:55:19 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:19.335 2 INFO neutron.agent.securitygroups_rpc [None req-fa6c5d49-c4dc-46a9-9459-bf711075cc0d 930a89cab3af43239942c71cee47dc19 904cc8942364443bb4c4a4017bb1e647 - - default default] Security group member updated ['4db01845-8230-4c8d-a3f4-5e942e576ef7']#033[00m Feb 1 04:55:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:20 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:20.549 2 INFO neutron.agent.securitygroups_rpc [None req-6dfed5db-0b52-4f95-900a-39e3e0691fbf cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:20 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 2 addresses Feb 1 04:55:20 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:20 localhost podman[308820]: 2026-02-01 09:55:20.800010598 +0000 UTC m=+0.058121182 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:55:20 localhost dnsmasq-dhcp[308119]: read 
/var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:20 localhost nova_compute[274317]: 2026-02-01 09:55:20.968 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:21 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:21.280 2 INFO neutron.agent.securitygroups_rpc [None req-cbc03874-de2d-4db1-8aeb-b67049c1615b cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:55:21 Feb 1 04:55:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:55:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:55:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'manila_data', 'manila_metadata', 'volumes', 'images'] Feb 1 04:55:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:55:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:21 localhost dnsmasq[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 1 addresses Feb 1 04:55:21 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:21 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:21 localhost podman[308856]: 2026-02-01 09:55:21.509469681 +0000 UTC m=+0.059815964 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 
0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:55:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16) Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:55:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:55:21 localhost nova_compute[274317]: 2026-02-01 09:55:21.708 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:22 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:22.096 2 INFO neutron.agent.securitygroups_rpc [None req-22955056-d501-42ef-9a2c-bf3d181d8fe4 cc5a6d3f99ef4279ae1c5508734703e2 7e00f2ed54c74d70847b97f9f434e5e6 - - default default] Security group member updated ['d3c7388a-ba3b-4ea2-bcb7-369aeac4af1b']#033[00m Feb 1 04:55:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:22 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:22.220 2 INFO neutron.agent.securitygroups_rpc [None req-68b28681-9762-4f24-92af-1fa7309650a4 edcc55a03c02426f897467232a84b22e eeec82e52999475da0fa4e4a4a8effbd - - default default] Security group rule updated ['150b315a-79ca-493c-98be-8b45107659c4']#033[00m Feb 1 04:55:22 localhost dnsmasq[308119]: read 
/var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/addn_hosts - 0 addresses Feb 1 04:55:22 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/host Feb 1 04:55:22 localhost dnsmasq-dhcp[308119]: read /var/lib/neutron/dhcp/9ecb4282-8104-4878-8e0d-966d3ce505f1/opts Feb 1 04:55:22 localhost podman[308893]: 2026-02-01 09:55:22.333165602 +0000 UTC m=+0.062721394 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:55:22 localhost podman[308907]: 2026-02-01 09:55:22.449069124 +0000 UTC m=+0.085037606 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=openstack_network_exporter, architecture=x86_64, version=9.7, io.buildah.version=1.33.7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z) Feb 1 04:55:22 localhost podman[308907]: 2026-02-01 09:55:22.463708938 +0000 UTC m=+0.099677400 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, release=1769056855, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9/ubi-minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 1 04:55:22 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:55:22 localhost systemd[1]: tmp-crun.VzOZIy.mount: Deactivated successfully. 
Feb 1 04:55:22 localhost podman[308908]: 2026-02-01 09:55:22.556539834 +0000 UTC m=+0.186182160 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 1 04:55:22 localhost podman[308908]: 2026-02-01 09:55:22.590804975 +0000 UTC m=+0.220447301 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:22 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:55:23 localhost dnsmasq[308119]: exiting on receipt of SIGTERM Feb 1 04:55:23 localhost podman[308967]: 2026-02-01 09:55:23.061043316 +0000 UTC m=+0.054619053 container kill 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:23 localhost systemd[1]: libpod-9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386.scope: Deactivated successfully. Feb 1 04:55:23 localhost podman[308979]: 2026-02-01 09:55:23.134954606 +0000 UTC m=+0.058481402 container died 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:55:23 localhost podman[308979]: 2026-02-01 09:55:23.166461963 +0000 UTC m=+0.089988719 container cleanup 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:55:23 localhost systemd[1]: libpod-conmon-9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386.scope: Deactivated successfully. 
Feb 1 04:55:23 localhost podman[308981]: 2026-02-01 09:55:23.214889993 +0000 UTC m=+0.131765934 container remove 9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ecb4282-8104-4878-8e0d-966d3ce505f1, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:55:23 localhost nova_compute[274317]: 2026-02-01 09:55:23.262 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:23 localhost kernel: device tapaebf5a93-f1 left promiscuous mode Feb 1 04:55:23 localhost ovn_controller[152787]: 2026-02-01T09:55:23Z|00142|binding|INFO|Releasing lport aebf5a93-f1df-421a-8bc6-9d245205815f from this chassis (sb_readonly=0) Feb 1 04:55:23 localhost ovn_controller[152787]: 2026-02-01T09:55:23Z|00143|binding|INFO|Setting lport aebf5a93-f1df-421a-8bc6-9d245205815f down in Southbound Feb 1 04:55:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:23.277 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-9ecb4282-8104-4878-8e0d-966d3ce505f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ecb4282-8104-4878-8e0d-966d3ce505f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e00f2ed54c74d70847b97f9f434e5e6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64afe251-cbee-41d8-8098-a70c383c96db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=aebf5a93-f1df-421a-8bc6-9d245205815f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:23.279 158655 INFO neutron.agent.ovn.metadata.agent [-] Port aebf5a93-f1df-421a-8bc6-9d245205815f in datapath 9ecb4282-8104-4878-8e0d-966d3ce505f1 unbound from our chassis#033[00m Feb 1 04:55:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:23.282 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ecb4282-8104-4878-8e0d-966d3ce505f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:55:23 localhost nova_compute[274317]: 2026-02-01 09:55:23.283 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:23.285 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7c20b04b-5332-4749-a622-1b2ccfb9220c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:23 localhost systemd[1]: var-lib-containers-storage-overlay-4ca2471c03211d064700b7109f6deb39ac7204f972477d84418c84baebc05a05-merged.mount: Deactivated successfully. Feb 1 04:55:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d217b20f4d879dfce4c9830258190d034479b66007eecdbbb50fff980de1386-userdata-shm.mount: Deactivated successfully. Feb 1 04:55:23 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:23.553 259225 INFO neutron.agent.dhcp.agent [None req-26241b25-ba26-48db-bff7-7bc075a8eea3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:23 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:23.554 259225 INFO neutron.agent.dhcp.agent [None req-26241b25-ba26-48db-bff7-7bc075a8eea3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:23 localhost systemd[1]: run-netns-qdhcp\x2d9ecb4282\x2d8104\x2d4878\x2d8e0d\x2d966d3ce505f1.mount: Deactivated successfully. Feb 1 04:55:24 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:24.057 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:24 localhost nova_compute[274317]: 2026-02-01 09:55:24.446 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e124 e124: 6 total, 6 up, 6 in Feb 1 04:55:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:25.353 259225 INFO neutron.agent.linux.ip_lib [None req-eeefd9da-3351-403c-aeb1-adf9d385a92d - - - - - -] Device tapeaa11732-01 cannot be used as it has no MAC address#033[00m Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.379 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost kernel: device tapeaa11732-01 entered promiscuous mode Feb 1 04:55:25 localhost NetworkManager[5972]: [1769939725.3854] manager: (tapeaa11732-01): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Feb 1 04:55:25 localhost ovn_controller[152787]: 2026-02-01T09:55:25Z|00144|binding|INFO|Claiming lport eaa11732-01f9-489d-9b7f-3f8e2175bbb2 for this chassis. Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.387 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost ovn_controller[152787]: 2026-02-01T09:55:25Z|00145|binding|INFO|eaa11732-01f9-489d-9b7f-3f8e2175bbb2: Claiming unknown Feb 1 04:55:25 localhost systemd-udevd[309020]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.392 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:25.397 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-0f958be9-2a71-46b8-be29-bf69d602dea7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f958be9-2a71-46b8-be29-bf69d602dea7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27418984-2ed5-4c42-a2ac-15243821a950, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eaa11732-01f9-489d-9b7f-3f8e2175bbb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:25.399 158655 INFO neutron.agent.ovn.metadata.agent [-] Port eaa11732-01f9-489d-9b7f-3f8e2175bbb2 in datapath 0f958be9-2a71-46b8-be29-bf69d602dea7 bound to our chassis#033[00m Feb 1 04:55:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:25.401 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f958be9-2a71-46b8-be29-bf69d602dea7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:25.402 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[dd770504-095f-401d-a7bb-3c70844baa0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost ovn_controller[152787]: 2026-02-01T09:55:25Z|00146|binding|INFO|Setting lport eaa11732-01f9-489d-9b7f-3f8e2175bbb2 ovn-installed in OVS Feb 1 04:55:25 localhost ovn_controller[152787]: 2026-02-01T09:55:25Z|00147|binding|INFO|Setting lport eaa11732-01f9-489d-9b7f-3f8e2175bbb2 up in Southbound Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.420 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 
04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost journal[224955]: ethtool ioctl error on tapeaa11732-01: No such device Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.458 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.527 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:25 localhost nova_compute[274317]: 2026-02-01 09:55:25.970 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:26 localhost podman[309091]: Feb 1 04:55:26 localhost podman[309091]: 2026-02-01 09:55:26.277726995 +0000 UTC m=+0.086795641 container create 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:55:26 localhost systemd[1]: Started libpod-conmon-1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4.scope. Feb 1 04:55:26 localhost systemd[1]: tmp-crun.jV521i.mount: Deactivated successfully. Feb 1 04:55:26 localhost podman[309091]: 2026-02-01 09:55:26.233626349 +0000 UTC m=+0.042695025 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:55:26 localhost systemd[1]: Started libcrun container. 
Feb 1 04:55:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368071a3dbed367c88ceb2ce8433e3b17be46f6454c6658be28f451008d9eca3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:55:26 localhost podman[309091]: 2026-02-01 09:55:26.36115942 +0000 UTC m=+0.170228066 container init 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:55:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e125 e125: 6 total, 6 up, 6 in Feb 1 04:55:26 localhost podman[309091]: 2026-02-01 09:55:26.36987972 +0000 UTC m=+0.178948376 container start 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:55:26 localhost dnsmasq[309109]: started, version 2.85 cachesize 150 Feb 1 04:55:26 localhost dnsmasq[309109]: DNS service limited to local subnets Feb 1 04:55:26 localhost dnsmasq[309109]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:55:26 localhost dnsmasq[309109]: warning: no upstream servers configured Feb 1 04:55:26 localhost dnsmasq-dhcp[309109]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:55:26 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/addn_hosts - 0 addresses Feb 1 04:55:26 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/host Feb 1 04:55:26 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/opts Feb 1 04:55:26 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:26.444 259225 INFO neutron.agent.dhcp.agent [None req-eeefd9da-3351-403c-aeb1-adf9d385a92d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:25Z, description=, device_id=faab0309-c85c-4332-a9f4-449a0ffeae16, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=355f615d-2122-4356-9769-53760b28d43d, ip_allocation=immediate, mac_address=fa:16:3e:ea:eb:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:22Z, description=, dns_domain=, id=0f958be9-2a71-46b8-be29-bf69d602dea7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-160748546, 
port_security_enabled=True, project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40133, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1549, status=ACTIVE, subnets=['93832961-de74-40ac-80ba-b5c061596bf4'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:55:24Z, vlan_transparent=None, network_id=0f958be9-2a71-46b8-be29-bf69d602dea7, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:55:25Z on network 0f958be9-2a71-46b8-be29-bf69d602dea7#033[00m Feb 1 04:55:26 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:26.555 259225 INFO neutron.agent.dhcp.agent [None req-b19bd063-8896-4fe7-bd72-4c53d5aa80ed - - - - - -] DHCP configuration for ports {'862ccf8d-641e-477e-8e4f-8de14626e350'} is completed#033[00m Feb 1 04:55:26 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/addn_hosts - 1 addresses Feb 1 04:55:26 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/host Feb 1 04:55:26 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/opts Feb 1 04:55:26 localhost podman[309128]: 2026-02-01 09:55:26.629930628 +0000 UTC m=+0.060055151 container kill 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 1 04:55:26 localhost nova_compute[274317]: 2026-02-01 09:55:26.709 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:26 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:26.901 259225 INFO neutron.agent.dhcp.agent [None req-677b17b3-5850-47cf-89df-6737ac4422e6 - - - - - -] DHCP configuration for ports {'355f615d-2122-4356-9769-53760b28d43d'} is completed#033[00m Feb 1 04:55:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:27.130 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:25Z, description=, device_id=faab0309-c85c-4332-a9f4-449a0ffeae16, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=355f615d-2122-4356-9769-53760b28d43d, ip_allocation=immediate, mac_address=fa:16:3e:ea:eb:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:22Z, description=, dns_domain=, id=0f958be9-2a71-46b8-be29-bf69d602dea7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-160748546, 
port_security_enabled=True, project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40133, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1549, status=ACTIVE, subnets=['93832961-de74-40ac-80ba-b5c061596bf4'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:55:24Z, vlan_transparent=None, network_id=0f958be9-2a71-46b8-be29-bf69d602dea7, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:55:25Z on network 0f958be9-2a71-46b8-be29-bf69d602dea7#033[00m Feb 1 04:55:27 localhost podman[309168]: 2026-02-01 09:55:27.317708188 +0000 UTC m=+0.057848662 container kill 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:27 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/addn_hosts - 1 addresses Feb 1 04:55:27 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/host Feb 1 04:55:27 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/opts Feb 1 04:55:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:27.864 259225 INFO neutron.agent.dhcp.agent [None req-45d44043-08f4-444f-aaf8-3a05b2f1566a - - - - - -] DHCP configuration for ports {'355f615d-2122-4356-9769-53760b28d43d'} is completed#033[00m Feb 1 04:55:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s Feb 1 04:55:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e126 e126: 6 total, 6 up, 6 in Feb 1 04:55:28 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/addn_hosts - 0 addresses Feb 1 04:55:28 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/host Feb 1 04:55:28 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/0f958be9-2a71-46b8-be29-bf69d602dea7/opts Feb 1 04:55:28 localhost podman[309205]: 2026-02-01 09:55:28.833637181 +0000 UTC m=+0.057832144 container kill 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 1 04:55:29 localhost ovn_controller[152787]: 
2026-02-01T09:55:29Z|00148|binding|INFO|Releasing lport eaa11732-01f9-489d-9b7f-3f8e2175bbb2 from this chassis (sb_readonly=0) Feb 1 04:55:29 localhost kernel: device tapeaa11732-01 left promiscuous mode Feb 1 04:55:29 localhost ovn_controller[152787]: 2026-02-01T09:55:29Z|00149|binding|INFO|Setting lport eaa11732-01f9-489d-9b7f-3f8e2175bbb2 down in Southbound Feb 1 04:55:29 localhost nova_compute[274317]: 2026-02-01 09:55:29.053 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.065 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-0f958be9-2a71-46b8-be29-bf69d602dea7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f958be9-2a71-46b8-be29-bf69d602dea7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27418984-2ed5-4c42-a2ac-15243821a950, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eaa11732-01f9-489d-9b7f-3f8e2175bbb2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.067 158655 INFO neutron.agent.ovn.metadata.agent [-] Port eaa11732-01f9-489d-9b7f-3f8e2175bbb2 in datapath 0f958be9-2a71-46b8-be29-bf69d602dea7 unbound from our chassis#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.069 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f958be9-2a71-46b8-be29-bf69d602dea7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.070 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7242f089-db43-4182-99ac-3092a5ea36bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:29 localhost nova_compute[274317]: 2026-02-01 09:55:29.071 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.625 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 
'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:29 localhost nova_compute[274317]: 2026-02-01 09:55:29.626 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:29 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:29.627 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:55:29 localhost systemd[1]: tmp-crun.fzz6db.mount: Deactivated successfully. Feb 1 04:55:29 localhost podman[309227]: 2026-02-01 09:55:29.874748979 +0000 UTC m=+0.084254392 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 1 04:55:29 localhost podman[309227]: 2026-02-01 09:55:29.885182142 +0000 UTC m=+0.094687595 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 
'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Feb 1 04:55:29 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:55:30 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:30.001 2 INFO neutron.agent.securitygroups_rpc [None req-83329cbf-bffb-48da-a04c-50fb44fb93a0 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:30 localhost podman[236852]: time="2026-02-01T09:55:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:55:30 localhost dnsmasq[309109]: exiting on receipt of SIGTERM Feb 1 04:55:30 localhost systemd[1]: tmp-crun.Z4xu1g.mount: Deactivated successfully. Feb 1 04:55:30 localhost systemd[1]: libpod-1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4.scope: Deactivated successfully. 
Feb 1 04:55:30 localhost podman[309262]: 2026-02-01 09:55:30.035451779 +0000 UTC m=+0.063885741 container kill 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 1 04:55:30 localhost podman[236852]: @ - - [01/Feb/2026:09:55:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157173 "" "Go-http-client/1.1" Feb 1 04:55:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:30 localhost podman[236852]: 2026-02-01 09:55:30.122170406 +0000 UTC m=+1786.269114419 container died 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:55:30 localhost podman[236852]: @ - - [01/Feb/2026:09:55:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18657 "" "Go-http-client/1.1" Feb 1 04:55:30 localhost podman[309274]: 2026-02-01 09:55:30.183883958 +0000 UTC m=+0.133617401 container cleanup 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:55:30 localhost systemd[1]: libpod-conmon-1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4.scope: Deactivated successfully. 
Feb 1 04:55:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s Feb 1 04:55:30 localhost podman[309276]: 2026-02-01 09:55:30.236201128 +0000 UTC m=+0.179797971 container remove 1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f958be9-2a71-46b8-be29-bf69d602dea7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:30 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:30.261 259225 INFO neutron.agent.dhcp.agent [None req-adaa159b-a0b4-4e1e-93aa-04402ed92f9a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:30 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:30.415 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e127 e127: 6 total, 6 up, 6 in Feb 1 04:55:30 localhost nova_compute[274317]: 2026-02-01 09:55:30.648 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:30 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:30.831 2 INFO neutron.agent.securitygroups_rpc [None req-32692fbf-89c1-4916-95dd-247e36fdbe6a e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:30 localhost systemd[1]: var-lib-containers-storage-overlay-368071a3dbed367c88ceb2ce8433e3b17be46f6454c6658be28f451008d9eca3-merged.mount: Deactivated successfully. Feb 1 04:55:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1667d668f9d4040ea369c9fa887e4d24551b66133bc87d19b634731f42c30bd4-userdata-shm.mount: Deactivated successfully. Feb 1 04:55:30 localhost systemd[1]: run-netns-qdhcp\x2d0f958be9\x2d2a71\x2d46b8\x2dbe29\x2dbf69d602dea7.mount: Deactivated successfully. 
Feb 1 04:55:30 localhost nova_compute[274317]: 2026-02-01 09:55:30.972 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:31 localhost openstack_network_exporter[239388]: ERROR 09:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:55:31 localhost openstack_network_exporter[239388]: Feb 1 04:55:31 localhost openstack_network_exporter[239388]: ERROR 09:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:55:31 localhost openstack_network_exporter[239388]: Feb 1 04:55:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e128 e128: 6 total, 6 up, 6 in Feb 1 04:55:31 localhost nova_compute[274317]: 2026-02-01 09:55:31.711 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 525 B/s rd, 1.0 KiB/s wr, 2 op/s Feb 1 04:55:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:55:32 localhost podman[309303]: 2026-02-01 09:55:32.851500584 +0000 UTC m=+0.068479024 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:55:32 localhost podman[309303]: 2026-02-01 09:55:32.864684323 +0000 UTC m=+0.081662703 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:55:32 localhost systemd[1]: 
a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:55:33 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:33.929 2 INFO neutron.agent.securitygroups_rpc [None req-d273fe1f-00e9-4cf8-ba92-3b038868502e e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 48 op/s Feb 1 04:55:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:55:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3636105754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:55:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:55:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3636105754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:55:34 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:34.990 2 INFO neutron.agent.securitygroups_rpc [None req-cdd29e27-3485-4128-94e1-06734434ccf5 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:35 localhost 
nova_compute[274317]: 2026-02-01 09:55:35.974 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:55:36 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:55:36 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:55:36 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 88a42565-8b01-4ae4-85fa-58e794a8e3e8 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:55:36 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 88a42565-8b01-4ae4-85fa-58e794a8e3e8 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:55:36 localhost ceph-mgr[278126]: [progress INFO root] Completed event 88a42565-8b01-4ae4-85fa-58e794a8e3e8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:55:36 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:55:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 1.3 KiB/s wr, 37 op/s Feb 1 04:55:36 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:36.312 2 INFO neutron.agent.securitygroups_rpc [None req-0cc22e1e-f882-4556-8673-0cf2b002c102 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 e129: 6 total, 6 up, 6 in Feb 1 04:55:36 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:55:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:55:36 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:55:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:55:36 localhost nova_compute[274317]: 2026-02-01 09:55:36.716 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:37.288 2 INFO 
neutron.agent.securitygroups_rpc [None req-9031f569-eff7-411d-8454-e1e2bf358206 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:37.650 2 INFO neutron.agent.securitygroups_rpc [None req-c13e4698-12de-4fa4-84de-f7194e33c853 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 45 op/s Feb 1 04:55:38 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:38.387 2 INFO neutron.agent.securitygroups_rpc [None req-c43d2a56-e16d-478a-abbd-9e2bb456c208 d96cff636365480c93dc8d1f3e16c531 272972c8d99e4a5c99e73e4bdb72346d - - default default] Security group rule updated ['56a3691b-0dfa-477a-aaac-6fc6d2066735']#033[00m Feb 1 04:55:38 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:38.536 2 INFO neutron.agent.securitygroups_rpc [None req-e32d5a82-c050-454c-9bc9-fb7e97cffc23 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:38 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:38.677 2 INFO neutron.agent.securitygroups_rpc [None req-b42ac328-eb3f-4f32-8c68-ad479176a68e e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:39 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:39.502 2 INFO neutron.agent.securitygroups_rpc [None req-89d33254-82ae-4671-bea8-643bd9d50212 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:39 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:39.630 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:55:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 1.5 KiB/s wr, 40 op/s Feb 1 04:55:40 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:40.956 2 INFO neutron.agent.securitygroups_rpc [None req-f70d179e-59ca-44a3-8018-26f7c96f2b8c 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:40 localhost nova_compute[274317]: 2026-02-01 09:55:40.976 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:41.642 2 INFO 
neutron.agent.securitygroups_rpc [None req-3276d6b5-a480-4f28-8680-1993dd5ca124 565c83edf044493d9bd1199ad90d627d 28f88f2edf4c492c9754208b1c502849 - - default default] Security group member updated ['f98fef45-df22-4656-9ceb-98910abc5fa5']#033[00m Feb 1 04:55:41 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:41.657 259225 INFO neutron.agent.linux.ip_lib [None req-22ee0ef7-2461-4635-aaab-6107be9deebd - - - - - -] Device tap750c32c9-1c cannot be used as it has no MAC address#033[00m Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.678 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost kernel: device tap750c32c9-1c entered promiscuous mode Feb 1 04:55:41 localhost NetworkManager[5972]: [1769939741.6865] manager: (tap750c32c9-1c): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Feb 1 04:55:41 localhost ovn_controller[152787]: 2026-02-01T09:55:41Z|00150|binding|INFO|Claiming lport 750c32c9-1ccc-42ba-84bc-e13c95225798 for this chassis. Feb 1 04:55:41 localhost ovn_controller[152787]: 2026-02-01T09:55:41Z|00151|binding|INFO|750c32c9-1ccc-42ba-84bc-e13c95225798: Claiming unknown Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.690 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost systemd-udevd[309479]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.705 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-ae16cdd8-4ef0-4acb-9779-9431fa50e220', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ae16cdd8-4ef0-4acb-9779-9431fa50e220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f88f2edf4c492c9754208b1c502849', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea5af0e4-f5ed-413c-862a-945a06818c24, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=750c32c9-1ccc-42ba-84bc-e13c95225798) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.707 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 750c32c9-1ccc-42ba-84bc-e13c95225798 in datapath ae16cdd8-4ef0-4acb-9779-9431fa50e220 bound to our chassis#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.711 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ae16cdd8-4ef0-4acb-9779-9431fa50e220 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.712 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0fd155-15ee-4687-9d84-24a1587a0a9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.733 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost ovn_controller[152787]: 2026-02-01T09:55:41Z|00152|binding|INFO|Setting lport 750c32c9-1ccc-42ba-84bc-e13c95225798 ovn-installed in OVS Feb 1 04:55:41 localhost ovn_controller[152787]: 2026-02-01T09:55:41Z|00153|binding|INFO|Setting lport 750c32c9-1ccc-42ba-84bc-e13c95225798 up in Southbound Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.738 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.772 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.773 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:55:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:55:41 localhost nova_compute[274317]: 2026-02-01 09:55:41.803 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:41 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:41.856 2 INFO neutron.agent.securitygroups_rpc [None req-739b236d-6306-45a2-92f5-3e504f993767 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:42.046 2 INFO neutron.agent.securitygroups_rpc [None req-3c4d812b-8cb5-4a19-9709-fc562d1570e9 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.3 KiB/s wr, 35 op/s Feb 1 04:55:42 localhost podman[309534]: Feb 1 04:55:42 localhost podman[309534]: 2026-02-01 09:55:42.576377759 +0000 UTC m=+0.087898064 container create 
36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 1 04:55:42 localhost systemd[1]: Started libpod-conmon-36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5.scope. Feb 1 04:55:42 localhost podman[309534]: 2026-02-01 09:55:42.534561194 +0000 UTC m=+0.046081539 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:55:42 localhost systemd[1]: Started libcrun container. Feb 1 04:55:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ba26053806f3553eef62c358afce0b2364e6978b21be5bb236b0fdebaf3c20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:55:42 localhost podman[309534]: 2026-02-01 09:55:42.653998135 +0000 UTC m=+0.165518440 container init 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:55:42 localhost podman[309534]: 2026-02-01 09:55:42.664280114 +0000 UTC m=+0.175800419 container start 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:55:42 localhost dnsmasq[309552]: started, version 2.85 cachesize 150 Feb 1 04:55:42 localhost dnsmasq[309552]: DNS service limited to local subnets Feb 1 04:55:42 localhost dnsmasq[309552]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:55:42 localhost dnsmasq[309552]: warning: no upstream servers configured Feb 1 04:55:42 localhost dnsmasq-dhcp[309552]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:55:42 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 0 addresses Feb 1 04:55:42 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:42 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:42 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:42.720 259225 INFO neutron.agent.dhcp.agent [None req-22ee0ef7-2461-4635-aaab-6107be9deebd 
- - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bf862379-338a-4125-9b52-b08c60b25ce1, ip_allocation=immediate, mac_address=fa:16:3e:61:21:58, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1067819702, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:37Z, description=, dns_domain=, id=ae16cdd8-4ef0-4acb-9779-9431fa50e220, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1849249413, port_security_enabled=True, project_id=28f88f2edf4c492c9754208b1c502849, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51522, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1700, status=ACTIVE, subnets=['7b5b389a-883e-47a4-b850-997574034dd2'], tags=[], tenant_id=28f88f2edf4c492c9754208b1c502849, updated_at=2026-02-01T09:55:40Z, vlan_transparent=None, network_id=ae16cdd8-4ef0-4acb-9779-9431fa50e220, port_security_enabled=True, project_id=28f88f2edf4c492c9754208b1c502849, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f98fef45-df22-4656-9ceb-98910abc5fa5'], standard_attr_id=1721, status=DOWN, tags=[], tenant_id=28f88f2edf4c492c9754208b1c502849, updated_at=2026-02-01T09:55:41Z on network ae16cdd8-4ef0-4acb-9779-9431fa50e220#033[00m Feb 1 04:55:42 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:42.770 259225 INFO neutron.agent.dhcp.agent [None req-9b40e706-78fc-4a5e-a99d-f8d1b753b333 - - - - - -] DHCP configuration for ports {'ec2100b4-db1f-4d32-9011-61922e9925f7'} is completed#033[00m Feb 1 04:55:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:42.893 2 INFO neutron.agent.securitygroups_rpc [None req-201a996d-ad5d-4320-ba39-e8954e21d5dc 565c83edf044493d9bd1199ad90d627d 28f88f2edf4c492c9754208b1c502849 - - default default] Security group member updated ['f98fef45-df22-4656-9ceb-98910abc5fa5']#033[00m Feb 1 04:55:42 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 1 addresses Feb 1 04:55:42 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:42 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:42 localhost podman[309572]: 2026-02-01 09:55:42.912457373 +0000 UTC m=+0.058431612 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.054 259225 INFO neutron.agent.dhcp.agent [None req-22ee0ef7-2461-4635-aaab-6107be9deebd - - - - - -] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=9196554d-ed55-40d1-9612-8e12d76f3b7c, ip_allocation=immediate, mac_address=fa:16:3e:2f:f0:08, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1863404296, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:37Z, description=, dns_domain=, id=ae16cdd8-4ef0-4acb-9779-9431fa50e220, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1849249413, port_security_enabled=True, project_id=28f88f2edf4c492c9754208b1c502849, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51522, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1700, status=ACTIVE, subnets=['7b5b389a-883e-47a4-b850-997574034dd2'], tags=[], tenant_id=28f88f2edf4c492c9754208b1c502849, updated_at=2026-02-01T09:55:40Z, vlan_transparent=None, network_id=ae16cdd8-4ef0-4acb-9779-9431fa50e220, port_security_enabled=True, project_id=28f88f2edf4c492c9754208b1c502849, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f98fef45-df22-4656-9ceb-98910abc5fa5'], standard_attr_id=1737, status=DOWN, tags=[], tenant_id=28f88f2edf4c492c9754208b1c502849, updated_at=2026-02-01T09:55:42Z on network ae16cdd8-4ef0-4acb-9779-9431fa50e220#033[00m Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.072 259225 INFO neutron.agent.linux.dhcp [None req-22ee0ef7-2461-4635-aaab-6107be9deebd - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.073 259225 INFO neutron.agent.linux.dhcp [None req-22ee0ef7-2461-4635-aaab-6107be9deebd - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.073 259225 INFO neutron.agent.linux.dhcp [None req-22ee0ef7-2461-4635-aaab-6107be9deebd - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.127 259225 INFO neutron.agent.dhcp.agent [None req-eb24e3e8-62ca-4c98-9b13-8d750a21ca79 - - - - - -] DHCP configuration for ports {'bf862379-338a-4125-9b52-b08c60b25ce1'} is completed#033[00m Feb 1 04:55:43 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 2 addresses Feb 1 04:55:43 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:43 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:43 localhost podman[309613]: 2026-02-01 09:55:43.241082986 +0000 UTC m=+0.055933294 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:55:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:43.386 2 INFO neutron.agent.securitygroups_rpc [None req-37d62ac1-fed3-4baa-9fc5-73880f6c1760 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:43.449 2 INFO neutron.agent.securitygroups_rpc [None req-3fe18ee3-04ff-4a52-96de-1f4dd720f476 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:43.507 259225 INFO neutron.agent.dhcp.agent [None req-1b1afa22-388c-4f3a-a459-6c5728e1c876 - - - - - -] DHCP configuration for ports {'9196554d-ed55-40d1-9612-8e12d76f3b7c'} is completed#033[00m Feb 1 04:55:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:43.912 2 INFO neutron.agent.securitygroups_rpc [None req-14278f14-d261-4185-9918-5bcd6670f17f 565c83edf044493d9bd1199ad90d627d 28f88f2edf4c492c9754208b1c502849 - - default default] Security group member updated ['f98fef45-df22-4656-9ceb-98910abc5fa5']#033[00m Feb 1 04:55:44 localhost podman[309649]: 2026-02-01 09:55:44.152182566 +0000 UTC m=+0.061576748 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:55:44 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 1 addresses Feb 1 04:55:44 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:44 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:44 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:44.171 2 INFO neutron.agent.securitygroups_rpc [None req-edf8cab4-6686-4ad5-95d6-283face1fc91 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 3.9 KiB/s rd, 307 B/s wr, 5 op/s Feb 1 04:55:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:44.405 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:55:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], 
fixed_ips=[], id=bf862379-338a-4125-9b52-b08c60b25ce1, ip_allocation=immediate, mac_address=fa:16:3e:61:21:58, name=tempest-new-port-name-875240783, network_id=ae16cdd8-4ef0-4acb-9779-9431fa50e220, port_security_enabled=True, project_id=28f88f2edf4c492c9754208b1c502849, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['f98fef45-df22-4656-9ceb-98910abc5fa5'], standard_attr_id=1721, status=DOWN, tags=[], tenant_id=28f88f2edf4c492c9754208b1c502849, updated_at=2026-02-01T09:55:44Z on network ae16cdd8-4ef0-4acb-9779-9431fa50e220#033[00m Feb 1 04:55:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:44.420 259225 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:44.421 259225 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:44.421 259225 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Feb 1 04:55:44 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 1 addresses Feb 1 04:55:44 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:44 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:44 localhost podman[309687]: 2026-02-01 09:55:44.600401185 +0000 UTC m=+0.065185011 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:55:44 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:44.811 2 INFO neutron.agent.securitygroups_rpc [None req-6cb6b6f1-5b7d-4172-a864-3d0396afa917 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:44 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:44.986 259225 INFO neutron.agent.dhcp.agent [None req-ca729419-63e1-4987-ad10-810dca2406a3 - - - - - -] DHCP configuration for ports {'bf862379-338a-4125-9b52-b08c60b25ce1'} is completed#033[00m Feb 1 04:55:45 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:45.045 2 INFO neutron.agent.securitygroups_rpc [None req-889331f7-1b8f-45c7-9568-2acc0f065d63 565c83edf044493d9bd1199ad90d627d 28f88f2edf4c492c9754208b1c502849 - - default default] Security group member updated ['f98fef45-df22-4656-9ceb-98910abc5fa5']#033[00m Feb 1 04:55:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:45 localhost systemd[1]: tmp-crun.oMQ2lI.mount: Deactivated successfully. 
Feb 1 04:55:45 localhost dnsmasq[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/addn_hosts - 0 addresses Feb 1 04:55:45 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/host Feb 1 04:55:45 localhost podman[309724]: 2026-02-01 09:55:45.244575005 +0000 UTC m=+0.068422782 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:55:45 localhost dnsmasq-dhcp[309552]: read /var/lib/neutron/dhcp/ae16cdd8-4ef0-4acb-9779-9431fa50e220/opts Feb 1 04:55:45 localhost nova_compute[274317]: 2026-02-01 09:55:45.978 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:46 localhost dnsmasq[309552]: exiting on receipt of SIGTERM Feb 1 04:55:46 localhost podman[309761]: 2026-02-01 09:55:46.111421274 +0000 UTC m=+0.048927607 container kill 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:46 localhost systemd[1]: libpod-36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5.scope: Deactivated successfully. Feb 1 04:55:46 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:46.168 2 INFO neutron.agent.securitygroups_rpc [None req-04f416d8-8fa2-4799-adf0-ff612b0eb9e5 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:46 localhost podman[309775]: 2026-02-01 09:55:46.179317717 +0000 UTC m=+0.051566478 container died 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:55:46 localhost systemd[1]: tmp-crun.PNKpGg.mount: Deactivated successfully. 
Feb 1 04:55:46 localhost podman[309775]: 2026-02-01 09:55:46.215571741 +0000 UTC m=+0.087820472 container cleanup 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:55:46 localhost systemd[1]: libpod-conmon-36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5.scope: Deactivated successfully. Feb 1 04:55:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 3.9 KiB/s rd, 307 B/s wr, 5 op/s Feb 1 04:55:46 localhost podman[309776]: 2026-02-01 09:55:46.255751766 +0000 UTC m=+0.124138558 container remove 36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ae16cdd8-4ef0-4acb-9779-9431fa50e220, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:55:46 localhost nova_compute[274317]: 2026-02-01 09:55:46.307 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:46 localhost ovn_controller[152787]: 2026-02-01T09:55:46Z|00154|binding|INFO|Releasing lport 750c32c9-1ccc-42ba-84bc-e13c95225798 from this chassis (sb_readonly=0) Feb 1 04:55:46 localhost kernel: device tap750c32c9-1c left promiscuous mode Feb 1 04:55:46 localhost ovn_controller[152787]: 2026-02-01T09:55:46Z|00155|binding|INFO|Setting lport 750c32c9-1ccc-42ba-84bc-e13c95225798 down in Southbound Feb 1 04:55:46 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:46.318 2 INFO neutron.agent.securitygroups_rpc [None req-31a351f9-c21a-4778-8d23-ceef24052c50 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:46.323 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-ae16cdd8-4ef0-4acb-9779-9431fa50e220', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ae16cdd8-4ef0-4acb-9779-9431fa50e220', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f88f2edf4c492c9754208b1c502849', 'neutron:revision_number': '3', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea5af0e4-f5ed-413c-862a-945a06818c24, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=750c32c9-1ccc-42ba-84bc-e13c95225798) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:55:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:46.324 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 750c32c9-1ccc-42ba-84bc-e13c95225798 in datapath ae16cdd8-4ef0-4acb-9779-9431fa50e220 unbound from our chassis#033[00m Feb 1 04:55:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:46.327 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ae16cdd8-4ef0-4acb-9779-9431fa50e220 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:55:46 localhost ovn_metadata_agent[158650]: 2026-02-01 09:55:46.328 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f4210da8-d272-4342-851e-a09ed3076ef8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:55:46 localhost nova_compute[274317]: 2026-02-01 09:55:46.328 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:46.359 259225 INFO neutron.agent.dhcp.agent [None req-718b6805-6850-49b5-b355-503fd866ab19 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:46 localhost nova_compute[274317]: 2026-02-01 09:55:46.738 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:55:46.745 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:55:47 localhost nova_compute[274317]: 2026-02-01 09:55:47.091 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:47 localhost nova_compute[274317]: 2026-02-01 09:55:47.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:47 localhost nova_compute[274317]: 2026-02-01 09:55:47.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:55:47 localhost nova_compute[274317]: 2026-02-01 09:55:47.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:55:47 localhost systemd[1]: 
var-lib-containers-storage-overlay-e7ba26053806f3553eef62c358afce0b2364e6978b21be5bb236b0fdebaf3c20-merged.mount: Deactivated successfully. Feb 1 04:55:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36dda2713cf22e6c85648690c9fbdbfab9d85ce4c33fddceef5ec94474b864b5-userdata-shm.mount: Deactivated successfully. Feb 1 04:55:47 localhost systemd[1]: run-netns-qdhcp\x2dae16cdd8\x2d4ef0\x2d4acb\x2d9779\x2d9431fa50e220.mount: Deactivated successfully. Feb 1 04:55:47 localhost nova_compute[274317]: 2026-02-01 09:55:47.122 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:55:47 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:47.569 2 INFO neutron.agent.securitygroups_rpc [None req-33ff6921-4704-427a-80ac-43e95e4fc8cf 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:47 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:47.860 2 INFO neutron.agent.securitygroups_rpc [None req-d0c90d7d-7396-4a27-b056-110260352268 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 3.3 KiB/s rd, 263 B/s wr, 4 op/s Feb 1 04:55:48 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:48.671 2 INFO neutron.agent.securitygroups_rpc [None req-08f39597-7736-4b6c-bf06-18c33436307c 6febfd614c0f4e5bbcdad7acfe861496 6419fd8b712b467ea6e03df22d411fcf - - default default] Security group member updated ['4a3b0332-f824-4e4c-b1eb-cf09581851da']#033[00m Feb 1 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:55:48 localhost systemd[1]: tmp-crun.HngtPp.mount: Deactivated successfully. 
Feb 1 04:55:48 localhost podman[309803]: 2026-02-01 09:55:48.872481716 +0000 UTC m=+0.088773632 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible) Feb 1 04:55:48 localhost podman[309804]: 2026-02-01 09:55:48.917102448 +0000 UTC m=+0.129152742 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:55:48 localhost podman[309803]: 2026-02-01 09:55:48.935710944 +0000 UTC m=+0.152002840 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:55:48 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:55:48 localhost podman[309804]: 2026-02-01 09:55:48.952234196 +0000 UTC m=+0.164284460 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:55:48 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:55:50 localhost nova_compute[274317]: 2026-02-01 09:55:50.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:50 localhost nova_compute[274317]: 2026-02-01 09:55:50.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:50 localhost nova_compute[274317]: 2026-02-01 09:55:50.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 04:55:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:51 localhost nova_compute[274317]: 2026-02-01 09:55:51.012 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:51 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:51.315 2 INFO neutron.agent.securitygroups_rpc [None req-a98fbbc3-cb77-458d-bae0-4950f59446e4 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 04:55:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:55:51 localhost nova_compute[274317]: 2026-02-01 09:55:51.741 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:52.140 2 INFO neutron.agent.securitygroups_rpc [None req-0dd8b6b5-2ecb-4256-a677-3e4c95ec3623 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.259 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.260 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.260 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.282 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.282 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.282 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.283 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.283 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:55:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:55:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1153971238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.748 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:55:52 localhost podman[309871]: 2026-02-01 09:55:52.864037814 +0000 UTC m=+0.079311259 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1769056855, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git) Feb 1 04:55:52 localhost podman[309871]: 2026-02-01 09:55:52.874666723 +0000 UTC m=+0.089940188 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, architecture=x86_64, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter) Feb 1 04:55:52 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:55:52 localhost podman[309872]: 2026-02-01 09:55:52.876473349 +0000 UTC m=+0.088220194 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.933 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.934 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11650MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.935 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:55:52 localhost nova_compute[274317]: 2026-02-01 09:55:52.935 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:55:52 localhost podman[309872]: 2026-02-01 09:55:52.956699395 +0000 UTC m=+0.168446220 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:55:52 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.224 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.225 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.483 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:55:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:55:53 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/382175384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:55:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:55:53 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2705022701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.930 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.936 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.957 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.985 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:55:53 localhost nova_compute[274317]: 2026-02-01 09:55:53.985 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:55:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:54.442 2 INFO neutron.agent.securitygroups_rpc [None req-e7867791-cd65-411f-8ed6-a1d97d2d0b42 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:54 localhost nova_compute[274317]: 2026-02-01 09:55:54.826 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:54 localhost nova_compute[274317]: 2026-02-01 09:55:54.826 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:54 localhost nova_compute[274317]: 2026-02-01 09:55:54.827 274321 DEBUG oslo_service.periodic_task [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:54 localhost nova_compute[274317]: 2026-02-01 09:55:54.827 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:55:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:55:55 localhost neutron_sriov_agent[252054]: 2026-02-01 09:55:55.811 2 INFO neutron.agent.securitygroups_rpc [None req-58b746b2-1860-41ad-b399-3d8dcfe6ba21 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:55:56 localhost nova_compute[274317]: 2026-02-01 09:55:56.069 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:56 localhost nova_compute[274317]: 2026-02-01 09:55:56.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail Feb 1 04:55:56 localhost nova_compute[274317]: 2026-02-01 09:55:56.742 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:55:57 localhost nova_compute[274317]: 2026-02-01 09:55:57.117 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:58 localhost nova_compute[274317]: 2026-02-01 09:55:58.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:55:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s Feb 1 04:56:00 localhost podman[236852]: time="2026-02-01T09:56:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:56:00 localhost podman[236852]: @ - - [01/Feb/2026:09:56:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:56:00 localhost podman[236852]: @ - - [01/Feb/2026:09:56:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18302 "" "Go-http-client/1.1" Feb 1 04:56:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Feb 1 04:56:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s Feb 1 04:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:56:00 localhost podman[309930]: 2026-02-01 09:56:00.854391476 +0000 UTC m=+0.069963559 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_compute) Feb 1 04:56:00 localhost podman[309930]: 2026-02-01 09:56:00.868700569 +0000 UTC m=+0.084272722 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 04:56:00 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:56:01 localhost nova_compute[274317]: 2026-02-01 09:56:01.070 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e130 e130: 6 total, 6 up, 6 in Feb 1 04:56:01 localhost openstack_network_exporter[239388]: ERROR 09:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:56:01 localhost openstack_network_exporter[239388]: Feb 1 04:56:01 localhost openstack_network_exporter[239388]: ERROR 09:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:56:01 localhost openstack_network_exporter[239388]: Feb 1 04:56:01 localhost nova_compute[274317]: 2026-02-01 09:56:01.747 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s Feb 1 04:56:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e131 e131: 6 total, 6 up, 6 in Feb 1 04:56:02 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:02.680 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:03.364 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:03 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e132 e132: 6 total, 6 up, 6 in Feb 1 04:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:56:03 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:03.824 259225 INFO neutron.agent.linux.ip_lib [None req-3140bc6e-3c00-4049-bd5d-7eceb2ee1ff1 - - - - - -] Device tap71466265-5f cannot be used as it has no MAC address#033[00m Feb 1 04:56:03 localhost podman[309950]: 2026-02-01 09:56:03.847822748 +0000 UTC m=+0.094376965 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:56:03 localhost nova_compute[274317]: 2026-02-01 09:56:03.857 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:03 localhost podman[309950]: 2026-02-01 09:56:03.858475648 +0000 UTC m=+0.105029835 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:56:03 localhost kernel: device tap71466265-5f entered promiscuous mode Feb 1 04:56:03 localhost nova_compute[274317]: 2026-02-01 09:56:03.865 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:03 localhost ovn_controller[152787]: 2026-02-01T09:56:03Z|00156|binding|INFO|Claiming lport 71466265-5f83-483a-a896-41c28a392e73 for this chassis. Feb 1 04:56:03 localhost ovn_controller[152787]: 2026-02-01T09:56:03Z|00157|binding|INFO|71466265-5f83-483a-a896-41c28a392e73: Claiming unknown Feb 1 04:56:03 localhost NetworkManager[5972]: [1769939763.8683] manager: (tap71466265-5f): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Feb 1 04:56:03 localhost systemd-udevd[309981]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:56:03 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:03.877 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-6d00e50d-ad20-4c3b-83fb-c0f039efd634', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00e50d-ad20-4c3b-83fb-c0f039efd634', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd93c82b-505e-4fa1-935a-07bfb46ac2bf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71466265-5f83-483a-a896-41c28a392e73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:03 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:03.879 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 71466265-5f83-483a-a896-41c28a392e73 in datapath 6d00e50d-ad20-4c3b-83fb-c0f039efd634 bound to our chassis#033[00m Feb 1 04:56:03 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:56:03 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:03.883 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6d00e50d-ad20-4c3b-83fb-c0f039efd634 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:56:03 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:03.884 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f6030755-8968-4660-a849-553f20ca8d00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost ovn_controller[152787]: 2026-02-01T09:56:03Z|00158|binding|INFO|Setting lport 71466265-5f83-483a-a896-41c28a392e73 ovn-installed in OVS Feb 1 04:56:03 localhost ovn_controller[152787]: 2026-02-01T09:56:03Z|00159|binding|INFO|Setting lport 71466265-5f83-483a-a896-41c28a392e73 up in Southbound Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost nova_compute[274317]: 2026-02-01 09:56:03.904 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost journal[224955]: ethtool ioctl error on tap71466265-5f: No such device Feb 1 04:56:03 localhost nova_compute[274317]: 2026-02-01 09:56:03.939 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:03 localhost nova_compute[274317]: 2026-02-01 09:56:03.966 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s rd, 511 B/s wr, 5 op/s Feb 1 04:56:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e133 e133: 6 total, 6 up, 6 in Feb 1 04:56:04 localhost podman[310052]: Feb 1 04:56:04 localhost podman[310052]: 2026-02-01 09:56:04.782727666 +0000 UTC m=+0.082834217 container create cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:56:04 localhost systemd[1]: Started libpod-conmon-cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69.scope. 
Feb 1 04:56:04 localhost systemd[1]: Started libcrun container. Feb 1 04:56:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080106a4e4259907a6b8268b5326d948d1c07084ef37858b7872beeabb761334/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:56:04 localhost podman[310052]: 2026-02-01 09:56:04.743227722 +0000 UTC m=+0.043334273 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:56:04 localhost podman[310052]: 2026-02-01 09:56:04.846834272 +0000 UTC m=+0.146940823 container init cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:56:04 localhost podman[310052]: 2026-02-01 09:56:04.854878572 +0000 UTC m=+0.154985113 container start cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:56:04 localhost dnsmasq[310071]: started, version 2.85 cachesize 150 Feb 1 04:56:04 localhost dnsmasq[310071]: DNS service limited to local subnets Feb 1 04:56:04 localhost dnsmasq[310071]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:56:04 localhost dnsmasq[310071]: warning: no upstream servers configured Feb 1 04:56:04 localhost dnsmasq-dhcp[310071]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d Feb 1 04:56:04 localhost dnsmasq[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/addn_hosts - 0 addresses Feb 1 04:56:04 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/host Feb 1 04:56:04 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/opts Feb 1 04:56:04 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:04.918 259225 INFO neutron.agent.dhcp.agent [None req-3140bc6e-3c00-4049-bd5d-7eceb2ee1ff1 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:56:03Z, description=, device_id=ca4d5fd2-fcc5-4bbf-84e5-6e063f1f23d4, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=87ed5c48-18a2-4a05-820b-da5952a8289d, ip_allocation=immediate, mac_address=fa:16:3e:33:ce:c8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:59Z, description=, dns_domain=, 
id=6d00e50d-ad20-4c3b-83fb-c0f039efd634, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-352168199, port_security_enabled=True, project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21257, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1857, status=ACTIVE, subnets=['88123a8b-9c30-4a59-b1d6-4fd658119a87'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:56:01Z, vlan_transparent=None, network_id=6d00e50d-ad20-4c3b-83fb-c0f039efd634, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1876, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:56:03Z on network 6d00e50d-ad20-4c3b-83fb-c0f039efd634#033[00m Feb 1 04:56:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:05.095 259225 INFO neutron.agent.dhcp.agent [None req-312d8b52-72a4-4c64-9c9e-81992ee8e002 - - - - - -] DHCP configuration for ports {'915d9796-5daf-41ca-ab93-9109392896ab'} is completed#033[00m Feb 1 04:56:05 localhost dnsmasq[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/addn_hosts - 1 addresses Feb 1 04:56:05 localhost podman[310088]: 2026-02-01 09:56:05.0978241 +0000 UTC m=+0.056443990 container kill cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:56:05 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/host Feb 1 04:56:05 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/opts Feb 1 04:56:05 localhost nova_compute[274317]: 2026-02-01 09:56:05.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:05 localhost nova_compute[274317]: 2026-02-01 09:56:05.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 04:56:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:05 localhost nova_compute[274317]: 2026-02-01 09:56:05.129 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 04:56:05 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:05.237 2 INFO neutron.agent.securitygroups_rpc [None req-33d595e3-b3a6-4bc9-b70b-e120045130a2 
80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:05.340 259225 INFO neutron.agent.dhcp.agent [None req-9c5e2d71-9c53-492b-925b-55c362b4aa11 - - - - - -] DHCP configuration for ports {'87ed5c48-18a2-4a05-820b-da5952a8289d'} is completed#033[00m Feb 1 04:56:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:05.438 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:56:03Z, description=, device_id=ca4d5fd2-fcc5-4bbf-84e5-6e063f1f23d4, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=87ed5c48-18a2-4a05-820b-da5952a8289d, ip_allocation=immediate, mac_address=fa:16:3e:33:ce:c8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:55:59Z, description=, dns_domain=, id=6d00e50d-ad20-4c3b-83fb-c0f039efd634, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-352168199, port_security_enabled=True, project_id=904cc8942364443bb4c4a4017bb1e647, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21257, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1857, status=ACTIVE, subnets=['88123a8b-9c30-4a59-b1d6-4fd658119a87'], tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:56:01Z, vlan_transparent=None, network_id=6d00e50d-ad20-4c3b-83fb-c0f039efd634, port_security_enabled=False, project_id=904cc8942364443bb4c4a4017bb1e647, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1876, status=DOWN, tags=[], tenant_id=904cc8942364443bb4c4a4017bb1e647, updated_at=2026-02-01T09:56:03Z on network 6d00e50d-ad20-4c3b-83fb-c0f039efd634#033[00m Feb 1 04:56:05 localhost dnsmasq[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/addn_hosts - 1 addresses Feb 1 04:56:05 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/host Feb 1 04:56:05 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/opts Feb 1 04:56:05 localhost podman[310126]: 2026-02-01 09:56:05.622657841 +0000 UTC m=+0.057748170 container kill cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:56:05 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:05.897 2 INFO neutron.agent.securitygroups_rpc [None req-4789720c-93ff-4d5d-a9e8-dc630a3e4cba 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated 
['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:05.901 259225 INFO neutron.agent.dhcp.agent [None req-0067f83a-94ec-4f98-99b3-9b304250a23a - - - - - -] DHCP configuration for ports {'87ed5c48-18a2-4a05-820b-da5952a8289d'} is completed#033[00m Feb 1 04:56:05 localhost nova_compute[274317]: 2026-02-01 09:56:05.983 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:06 localhost nova_compute[274317]: 2026-02-01 09:56:06.072 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 4.9 KiB/s rd, 626 B/s wr, 6 op/s Feb 1 04:56:06 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:06.527 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:06 localhost nova_compute[274317]: 2026-02-01 09:56:06.748 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 4.2 KiB/s wr, 60 op/s Feb 1 04:56:08 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:56:08 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2310334247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:56:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e134 e134: 6 total, 6 up, 6 in Feb 1 04:56:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 48 op/s Feb 1 04:56:10 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:10.913 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:11 localhost nova_compute[274317]: 2026-02-01 09:56:11.073 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:11 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e135 e135: 6 total, 6 up, 6 in Feb 1 04:56:11 localhost nova_compute[274317]: 2026-02-01 09:56:11.749 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 42 op/s Feb 1 04:56:12 localhost nova_compute[274317]: 2026-02-01 09:56:12.600 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:13 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e136 e136: 6 total, 6 up, 6 in Feb 1 04:56:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 2.8 KiB/s wr, 112 op/s Feb 1 04:56:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e137 e137: 6 total, 6 up, 6 in Feb 1 04:56:16 localhost nova_compute[274317]: 2026-02-01 09:56:16.075 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 2.8 KiB/s wr, 112 op/s Feb 1 04:56:16 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:16.400 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:56:16 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/455368239' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:56:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:56:16 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/455368239' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:56:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e138 e138: 6 total, 6 up, 6 in Feb 1 04:56:16 localhost nova_compute[274317]: 2026-02-01 09:56:16.759 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 764 MiB used, 41 GiB / 42 GiB avail; 140 KiB/s rd, 5.2 KiB/s wr, 186 op/s Feb 1 04:56:18 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:18.679 2 INFO neutron.agent.securitygroups_rpc [None req-f9943192-ce60-4425-aa06-00cabb160f7d 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:18 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:18.973 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:18 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:18.975 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:56:18 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:18.978 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:18 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:18.979 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[db80b759-b6b7-45d4-a231-472d5477d428]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:56:19 localhost podman[310148]: 2026-02-01 09:56:19.868009005 +0000 UTC m=+0.082726874 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:56:19 localhost podman[310148]: 2026-02-01 09:56:19.876762656 +0000 UTC m=+0.091480515 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:56:19 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:56:19 localhost podman[310147]: 2026-02-01 09:56:19.918485899 +0000 UTC m=+0.134269372 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:56:19 localhost podman[310147]: 2026-02-01 09:56:19.954238507 +0000 UTC m=+0.170022010 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:56:19 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated 
successfully. Feb 1 04:56:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 764 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 2.1 KiB/s wr, 67 op/s Feb 1 04:56:21 localhost nova_compute[274317]: 2026-02-01 09:56:21.114 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:56:21 Feb 1 04:56:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:56:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:56:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'backups', 'vms', 'manila_data', 'volumes'] Feb 1 04:56:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:56:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e139 e139: 6 total, 6 up, 6 in Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 
0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:56:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16) Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:56:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:56:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e140 e140: 6 total, 6 up, 6 in Feb 1 04:56:21 localhost nova_compute[274317]: 2026-02-01 09:56:21.763 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:21 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:21.974 259225 INFO neutron.agent.linux.ip_lib [None req-6674391d-ef0d-468c-9ac3-43050e859039 - - - - - -] Device tap89a67c8f-ab cannot be used as it has no MAC address#033[00m Feb 1 04:56:21 localhost nova_compute[274317]: 2026-02-01 09:56:21.994 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:21 localhost kernel: device tap89a67c8f-ab entered promiscuous mode Feb 1 04:56:22 localhost nova_compute[274317]: 2026-02-01 09:56:22.000 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:22 localhost ovn_controller[152787]: 2026-02-01T09:56:22Z|00160|binding|INFO|Claiming lport 89a67c8f-abeb-44ba-987b-710ed5812b98 for this chassis. Feb 1 04:56:22 localhost ovn_controller[152787]: 2026-02-01T09:56:22Z|00161|binding|INFO|89a67c8f-abeb-44ba-987b-710ed5812b98: Claiming unknown Feb 1 04:56:22 localhost NetworkManager[5972]: [1769939782.0025] manager: (tap89a67c8f-ab): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Feb 1 04:56:22 localhost systemd-udevd[310203]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost ovn_controller[152787]: 2026-02-01T09:56:22Z|00162|binding|INFO|Setting lport 89a67c8f-abeb-44ba-987b-710ed5812b98 ovn-installed in OVS Feb 1 04:56:22 localhost nova_compute[274317]: 2026-02-01 09:56:22.037 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost nova_compute[274317]: 2026-02-01 09:56:22.040 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost journal[224955]: ethtool ioctl error on tap89a67c8f-ab: No such device Feb 1 04:56:22 localhost nova_compute[274317]: 2026-02-01 09:56:22.074 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:22 localhost nova_compute[274317]: 2026-02-01 09:56:22.101 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:22 localhost ovn_controller[152787]: 2026-02-01T09:56:22Z|00163|binding|INFO|Setting lport 89a67c8f-abeb-44ba-987b-710ed5812b98 up in Southbound Feb 1 04:56:22 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:22.117 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c0dbb3ef-d632-48b0-b256-d985cf33ea92', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0dbb3ef-d632-48b0-b256-d985cf33ea92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d70c431093044779c88823510311e1a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7955782-fea5-4e19-bc74-89fb26d9b2eb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=89a67c8f-abeb-44ba-987b-710ed5812b98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:22 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:22.119 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 89a67c8f-abeb-44ba-987b-710ed5812b98 in datapath 
c0dbb3ef-d632-48b0-b256-d985cf33ea92 bound to our chassis#033[00m Feb 1 04:56:22 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:22.121 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c0dbb3ef-d632-48b0-b256-d985cf33ea92 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:56:22 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:22.122 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[a0398152-1a9d-4763-8df3-431b02624317]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 764 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 2.3 KiB/s wr, 73 op/s Feb 1 04:56:22 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:22.296 2 INFO neutron.agent.securitygroups_rpc [None req-a183bb9b-36ef-42c1-85bf-6ec8456cdf42 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:22 localhost podman[310274]: Feb 1 04:56:22 localhost podman[310274]: 2026-02-01 09:56:22.917119162 +0000 UTC m=+0.092765266 container create d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:56:22 localhost systemd[1]: Started libpod-conmon-d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61.scope. Feb 1 04:56:22 localhost podman[310274]: 2026-02-01 09:56:22.868115353 +0000 UTC m=+0.043761467 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:56:22 localhost systemd[1]: tmp-crun.yanSzL.mount: Deactivated successfully. Feb 1 04:56:22 localhost systemd[1]: Started libcrun container. Feb 1 04:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:56:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede7d4779104218ea621e19c563a4f5da37bdff6220daf39da5c830ef37d9d02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:56:23 localhost podman[310274]: 2026-02-01 09:56:23.002484877 +0000 UTC m=+0.178130961 container init d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:56:23 localhost dnsmasq[310310]: started, version 2.85 cachesize 150 Feb 1 04:56:23 localhost dnsmasq[310310]: DNS service limited to local subnets Feb 1 04:56:23 localhost dnsmasq[310310]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:56:23 localhost dnsmasq[310310]: warning: no upstream servers configured Feb 1 04:56:23 localhost dnsmasq-dhcp[310310]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:56:23 localhost dnsmasq[310310]: read /var/lib/neutron/dhcp/c0dbb3ef-d632-48b0-b256-d985cf33ea92/addn_hosts - 0 addresses Feb 1 04:56:23 localhost dnsmasq-dhcp[310310]: read /var/lib/neutron/dhcp/c0dbb3ef-d632-48b0-b256-d985cf33ea92/host Feb 1 04:56:23 localhost dnsmasq-dhcp[310310]: read /var/lib/neutron/dhcp/c0dbb3ef-d632-48b0-b256-d985cf33ea92/opts Feb 1 04:56:23 localhost podman[310288]: 2026-02-01 09:56:23.04646148 +0000 UTC m=+0.088750801 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, architecture=x86_64, version=9.7, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:56:23 localhost podman[310274]: 2026-02-01 09:56:23.061927388 +0000 UTC m=+0.237573472 container start d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:56:23 localhost podman[310288]: 2026-02-01 09:56:23.087688066 +0000 UTC m=+0.129977397 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.7, release=1769056855, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.) Feb 1 04:56:23 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:56:23 localhost podman[310302]: 2026-02-01 09:56:23.131969099 +0000 UTC m=+0.129975718 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:56:23 localhost podman[310302]: 2026-02-01 09:56:23.165986422 +0000 UTC m=+0.163993061 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Feb 1 04:56:23 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:56:23 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:23.444 259225 INFO neutron.agent.dhcp.agent [None req-aefb7c01-4e9b-4934-bf70-fd73a85dda45 - - - - - -] DHCP configuration for ports {'3ffbce15-2efc-45cc-abbd-0d8f25ff7bdf'} is completed#033[00m Feb 1 04:56:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e141 e141: 6 total, 6 up, 6 in Feb 1 04:56:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 2.7 KiB/s wr, 31 op/s Feb 1 04:56:25 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:25.103 2 INFO neutron.agent.securitygroups_rpc [None req-f4397a72-704c-448d-a51f-22a40616a177 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:25 localhost nova_compute[274317]: 2026-02-01 09:56:25.271 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e142 e142: 6 total, 6 up, 6 in Feb 1 04:56:26 localhost nova_compute[274317]: 2026-02-01 09:56:26.116 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 3.3 KiB/s wr, 38 op/s Feb 1 04:56:26 localhost nova_compute[274317]: 2026-02-01 09:56:26.765 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:26 localhost dnsmasq[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/addn_hosts - 0 addresses Feb 1 04:56:26 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/host Feb 1 04:56:26 localhost dnsmasq-dhcp[310071]: read /var/lib/neutron/dhcp/6d00e50d-ad20-4c3b-83fb-c0f039efd634/opts Feb 1 04:56:26 localhost podman[310347]: 2026-02-01 09:56:26.89481086 +0000 UTC m=+0.060179225 container kill cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:56:27 localhost 
nova_compute[274317]: 2026-02-01 09:56:27.342 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:27 localhost ovn_controller[152787]: 2026-02-01T09:56:27Z|00164|binding|INFO|Releasing lport 71466265-5f83-483a-a896-41c28a392e73 from this chassis (sb_readonly=0) Feb 1 04:56:27 localhost kernel: device tap71466265-5f left promiscuous mode Feb 1 04:56:27 localhost ovn_controller[152787]: 2026-02-01T09:56:27Z|00165|binding|INFO|Setting lport 71466265-5f83-483a-a896-41c28a392e73 down in Southbound Feb 1 04:56:27 localhost nova_compute[274317]: 2026-02-01 09:56:27.358 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:27.483 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-6d00e50d-ad20-4c3b-83fb-c0f039efd634', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00e50d-ad20-4c3b-83fb-c0f039efd634', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904cc8942364443bb4c4a4017bb1e647', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd93c82b-505e-4fa1-935a-07bfb46ac2bf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71466265-5f83-483a-a896-41c28a392e73) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:27.485 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 71466265-5f83-483a-a896-41c28a392e73 in datapath 6d00e50d-ad20-4c3b-83fb-c0f039efd634 unbound from our chassis#033[00m Feb 1 04:56:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:27.487 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6d00e50d-ad20-4c3b-83fb-c0f039efd634 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:56:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:27.488 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae717b-6b33-42c9-9dc1-ccb3b4692c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 4.5 KiB/s wr, 95 op/s Feb 1 04:56:28 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:28.356 2 INFO neutron.agent.securitygroups_rpc [None 
req-abdb9ca6-56bb-47f8-92cb-3bfb04a52114 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e143 e143: 6 total, 6 up, 6 in Feb 1 04:56:29 localhost dnsmasq[310071]: exiting on receipt of SIGTERM Feb 1 04:56:29 localhost podman[310387]: 2026-02-01 09:56:29.797446869 +0000 UTC m=+0.063611622 container kill cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 04:56:29 localhost systemd[1]: libpod-cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69.scope: Deactivated successfully. Feb 1 04:56:29 localhost podman[310402]: 2026-02-01 09:56:29.871788792 +0000 UTC m=+0.060114223 container died cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:56:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69-userdata-shm.mount: Deactivated successfully. Feb 1 04:56:29 localhost podman[310402]: 2026-02-01 09:56:29.899451319 +0000 UTC m=+0.087776710 container cleanup cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:56:29 localhost systemd[1]: libpod-conmon-cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69.scope: Deactivated successfully. 
Feb 1 04:56:29 localhost podman[310407]: 2026-02-01 09:56:29.953402501 +0000 UTC m=+0.126826381 container remove cbf4b698789784ecb7f952b9bb712e1282ae0a87e7cfdf356f96e5df169dda69 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d00e50d-ad20-4c3b-83fb-c0f039efd634, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:56:30 localhost podman[236852]: time="2026-02-01T09:56:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:56:30 localhost podman[236852]: @ - - [01/Feb/2026:09:56:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157173 "" "Go-http-client/1.1" Feb 1 04:56:30 localhost podman[236852]: @ - - [01/Feb/2026:09:56:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18766 "" "Go-http-client/1.1" Feb 1 04:56:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:30 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:30.152 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:30 localhost ovn_metadata_agent[158650]: 
2026-02-01 09:56:30.154 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:56:30 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:30.157 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:30 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:30.159 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[6d38de4a-12c0-46ea-a6d9-d2d8d843a69a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 2.1 KiB/s wr, 67 op/s Feb 1 04:56:30 localhost systemd[1]: var-lib-containers-storage-overlay-080106a4e4259907a6b8268b5326d948d1c07084ef37858b7872beeabb761334-merged.mount: Deactivated successfully. Feb 1 04:56:31 localhost nova_compute[274317]: 2026-02-01 09:56:31.038 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:31 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:31.039 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:31 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:31.042 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:56:31 localhost nova_compute[274317]: 2026-02-01 09:56:31.118 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:31 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:31.176 259225 INFO neutron.agent.dhcp.agent [None req-30ec3d39-beaa-4a68-857b-7802b4190f44 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:31 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:31.178 259225 INFO neutron.agent.dhcp.agent [None req-30ec3d39-beaa-4a68-857b-7802b4190f44 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:31 localhost systemd[1]: run-netns-qdhcp\x2d6d00e50d\x2dad20\x2d4c3b\x2d83fb\x2dc0f039efd634.mount: Deactivated successfully. Feb 1 04:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:56:31 localhost podman[310431]: 2026-02-01 09:56:31.286722124 +0000 UTC m=+0.083863560 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:56:31 localhost podman[310431]: 2026-02-01 09:56:31.302185663 +0000 UTC m=+0.099327099 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute) Feb 1 04:56:31 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:56:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e144 e144: 6 total, 6 up, 6 in Feb 1 04:56:31 localhost openstack_network_exporter[239388]: ERROR 09:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:56:31 localhost openstack_network_exporter[239388]: Feb 1 04:56:31 localhost openstack_network_exporter[239388]: ERROR 09:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:56:31 localhost openstack_network_exporter[239388]: Feb 1 04:56:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e145 e145: 6 total, 6 up, 6 in Feb 1 04:56:31 localhost nova_compute[274317]: 2026-02-01 09:56:31.798 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 2.3 KiB/s wr, 73 op/s Feb 1 04:56:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 7.2 KiB/s wr, 78 op/s Feb 1 04:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:56:34 localhost podman[310448]: 2026-02-01 09:56:34.85319514 +0000 UTC m=+0.074924592 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:56:34 localhost podman[310448]: 2026-02-01 09:56:34.862090875 +0000 UTC m=+0.083820337 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:56:34 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:56:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e146 e146: 6 total, 6 up, 6 in Feb 1 04:56:35 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:35.891 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:36 localhost nova_compute[274317]: 2026-02-01 09:56:36.121 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 7.2 KiB/s wr, 78 op/s Feb 1 04:56:36 localhost nova_compute[274317]: 2026-02-01 09:56:36.847 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:36 localhost nova_compute[274317]: 2026-02-01 09:56:36.863 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:37.020 2 INFO neutron.agent.securitygroups_rpc [None req-ea765bd5-9cd7-4b20-a560-8c3da0273449 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:37.044 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:56:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:56:37 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:56:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:56:37 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:56:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:56:37 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev ed7849d2-2d73-42b0-82c7-88ba13ad4b5f (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:56:37 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev ed7849d2-2d73-42b0-82c7-88ba13ad4b5f (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:56:37 localhost ceph-mgr[278126]: [progress INFO root] Completed event ed7849d2-2d73-42b0-82c7-88ba13ad4b5f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:56:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command 
mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:56:37 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:56:37 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:56:37 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:56:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e147 e147: 6 total, 6 up, 6 in Feb 1 04:56:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 11 KiB/s wr, 110 op/s Feb 1 04:56:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e148 e148: 6 total, 6 up, 6 in Feb 1 04:56:38 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:38.630 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:38 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:38.633 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:56:38 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:38.636 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace 
down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:38 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:38.637 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[c8464432-c746-4302-8a61-ee43795e0c12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.8 KiB/s wr, 42 op/s Feb 1 04:56:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e149 e149: 6 total, 6 up, 6 in Feb 1 04:56:41 localhost nova_compute[274317]: 2026-02-01 09:56:41.124 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:41 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:41.535 2 INFO neutron.agent.securitygroups_rpc [None req-fb00d927-fa6d-4c8a-857b-3eb5803ded56 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:41 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:56:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:56:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:56:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:56:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:56:41 localhost nova_compute[274317]: 2026-02-01 09:56:41.889 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.8 KiB/s wr, 42 op/s Feb 1 04:56:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:42.589 2 INFO neutron.agent.securitygroups_rpc [None req-868efe44-cb2f-4cd4-8b32-db45306b68ea 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:56:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:56:42 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon).osd e150 e150: 6 total, 6 up, 6 in Feb 1 04:56:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:43.577 2 INFO neutron.agent.securitygroups_rpc [None req-2270150a-743e-4e11-8e45-671bacf25871 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:43 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e151 e151: 6 total, 6 up, 6 in Feb 1 04:56:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 9.3 KiB/s wr, 104 op/s Feb 1 04:56:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e152 e152: 6 total, 6 up, 6 in Feb 1 04:56:46 localhost nova_compute[274317]: 2026-02-01 09:56:46.125 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 9.3 KiB/s wr, 104 op/s Feb 1 04:56:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e153 e153: 6 total, 6 up, 6 in Feb 1 04:56:46 localhost nova_compute[274317]: 2026-02-01 09:56:46.926 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e154 e154: 6 total, 6 up, 6 in Feb 1 04:56:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 146 KiB/s rd, 11 KiB/s wr, 200 op/s Feb 1 04:56:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:48.539 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 
'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:48.541 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:56:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:48.544 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:48.545 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[e7390015-f420-4dad-9d87-2665f41908a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e155 e155: 6 total, 6 up, 6 in Feb 1 04:56:49 localhost nova_compute[274317]: 2026-02-01 09:56:49.131 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:49 localhost nova_compute[274317]: 2026-02-01 09:56:49.131 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:56:49 localhost nova_compute[274317]: 2026-02-01 09:56:49.132 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:56:49 localhost nova_compute[274317]: 2026-02-01 09:56:49.241 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:56:49 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:49.800 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:50 localhost nova_compute[274317]: 2026-02-01 09:56:50.206 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 147 KiB/s rd, 11 KiB/s wr, 202 op/s Feb 1 04:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:56:50 localhost podman[310557]: 2026-02-01 09:56:50.88350384 +0000 UTC m=+0.085939574 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:56:50 localhost podman[310556]: 2026-02-01 09:56:50.938454142 +0000 UTC m=+0.142543928 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:56:50 localhost podman[310557]: 2026-02-01 09:56:50.953125837 +0000 UTC m=+0.155561581 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:56:50 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:56:50 localhost podman[310556]: 2026-02-01 09:56:50.97548681 +0000 UTC m=+0.179576626 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:56:51 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:56:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:51.104 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:51.107 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:56:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:51.110 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:51.111 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[008ce95e-676c-4056-a059-e95116233bd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:51 localhost nova_compute[274317]: 2026-02-01 09:56:51.128 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:56:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:56:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e156 e156: 6 total, 6 up, 6 in Feb 1 04:56:51 localhost nova_compute[274317]: 2026-02-01 09:56:51.929 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 121 KiB/s rd, 9.1 KiB/s wr, 166 op/s Feb 1 04:56:52 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:52.530 259225 INFO neutron.agent.linux.ip_lib [None req-7b487ffd-5635-401e-bacc-6723fac1006e - - - - - -] Device tap0ff05a29-3c cannot be used as it has no MAC address#033[00m Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.546 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost kernel: device tap0ff05a29-3c entered promiscuous mode Feb 1 04:56:52 localhost ovn_controller[152787]: 2026-02-01T09:56:52Z|00166|binding|INFO|Claiming lport 0ff05a29-3cc7-4c1a-a005-225d700300ca for this chassis. 
Feb 1 04:56:52 localhost ovn_controller[152787]: 2026-02-01T09:56:52Z|00167|binding|INFO|0ff05a29-3cc7-4c1a-a005-225d700300ca: Claiming unknown Feb 1 04:56:52 localhost NetworkManager[5972]: [1769939812.5523] manager: (tap0ff05a29-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.552 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost systemd-udevd[310612]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:56:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:52.571 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c3e71f40-156c-4217-bedf-836f04a8f728', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3e71f40-156c-4217-bedf-836f04a8f728', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff200d66c230435098f5a0489bf1e8f7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4bd8115-ffb2-4415-a799-f41a6c9021b2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ff05a29-3cc7-4c1a-a005-225d700300ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:52.572 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 0ff05a29-3cc7-4c1a-a005-225d700300ca in datapath c3e71f40-156c-4217-bedf-836f04a8f728 bound to our chassis#033[00m Feb 1 04:56:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:52.573 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3e71f40-156c-4217-bedf-836f04a8f728 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:56:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:52.573 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[93052da0-ea8a-4b5d-92f1-b69ac8f8c43b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost ovn_controller[152787]: 2026-02-01T09:56:52Z|00168|binding|INFO|Setting lport 0ff05a29-3cc7-4c1a-a005-225d700300ca ovn-installed in OVS Feb 1 04:56:52 localhost ovn_controller[152787]: 2026-02-01T09:56:52Z|00169|binding|INFO|Setting lport 0ff05a29-3cc7-4c1a-a005-225d700300ca up in Southbound Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.582 274321 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost journal[224955]: ethtool ioctl error on tap0ff05a29-3c: No such device Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.604 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.628 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:52.736 2 INFO neutron.agent.securitygroups_rpc [None req-28352136-f461-4efb-990d-d0ac566ee992 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:52 localhost nova_compute[274317]: 2026-02-01 09:56:52.972 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.118 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.118 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.119 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.120 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:56:53 localhost podman[310703]: Feb 1 04:56:53 localhost podman[310703]: 2026-02-01 09:56:53.468055412 +0000 UTC m=+0.113186387 container create 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:56:53 localhost systemd[1]: Started libpod-conmon-8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51.scope. Feb 1 04:56:53 localhost podman[310703]: 2026-02-01 09:56:53.409505238 +0000 UTC m=+0.054636273 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:56:53 localhost systemd[1]: Started libcrun container. Feb 1 04:56:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c22aff628e6bc84ba432acd1a0ec47a0f890d608bcc6d6b65fb5e1bf052ca32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:56:53 localhost podman[310716]: 2026-02-01 09:56:53.578589318 +0000 UTC m=+0.075346156 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, config_id=openstack_network_exporter, version=9.7, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=) Feb 1 04:56:53 localhost podman[310716]: 2026-02-01 09:56:53.591738155 +0000 UTC m=+0.088495023 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal) Feb 1 04:56:53 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:56:53 localhost podman[310718]: 2026-02-01 09:56:53.646572814 +0000 UTC m=+0.141776114 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:56:53 localhost podman[310703]: 2026-02-01 09:56:53.653186329 +0000 UTC m=+0.298317264 container init 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 1 04:56:53 localhost podman[310703]: 2026-02-01 09:56:53.662423386 +0000 UTC m=+0.307554311 container start 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:56:53 localhost dnsmasq[310765]: started, version 2.85 cachesize 150 Feb 1 04:56:53 localhost dnsmasq[310765]: DNS service limited to local subnets Feb 1 04:56:53 localhost dnsmasq[310765]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP 
DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:56:53 localhost dnsmasq[310765]: warning: no upstream servers configured Feb 1 04:56:53 localhost dnsmasq-dhcp[310765]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:56:53 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 0 addresses Feb 1 04:56:53 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:56:53 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.669 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:56:53 localhost podman[310718]: 2026-02-01 09:56:53.677053499 +0000 UTC m=+0.172256839 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:56:53 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:56:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:53.734 259225 INFO neutron.agent.linux.ip_lib [None req-bcfb9cac-69bb-453a-8579-2cf0d4687405 - - - - - -] Device tapdf480a46-ff cannot be used as it has no MAC address#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.759 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost kernel: device tapdf480a46-ff entered promiscuous mode Feb 1 04:56:53 localhost NetworkManager[5972]: [1769939813.7660] manager: (tapdf480a46-ff): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Feb 1 04:56:53 localhost ovn_controller[152787]: 2026-02-01T09:56:53Z|00170|binding|INFO|Claiming lport df480a46-ffeb-469e-8528-f16d97851fd4 for this chassis. Feb 1 04:56:53 localhost ovn_controller[152787]: 2026-02-01T09:56:53Z|00171|binding|INFO|df480a46-ffeb-469e-8528-f16d97851fd4: Claiming unknown Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.766 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.796 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost ovn_controller[152787]: 2026-02-01T09:56:53Z|00172|binding|INFO|Setting lport df480a46-ffeb-469e-8528-f16d97851fd4 ovn-installed in OVS Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.799 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.800 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost ovn_controller[152787]: 2026-02-01T09:56:53Z|00173|binding|INFO|Setting lport df480a46-ffeb-469e-8528-f16d97851fd4 up in Southbound Feb 1 04:56:53 localhost journal[224955]: ethtool ioctl error on tapdf480a46-ff: No such device Feb 1 04:56:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:53.833 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 
'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09d03f879db542be8bf676bafcc9ce36', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d1a8906-fc18-4fe5-9368-552a4dec9770, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df480a46-ffeb-469e-8528-f16d97851fd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:53.836 158655 INFO neutron.agent.ovn.metadata.agent [-] Port df480a46-ffeb-469e-8528-f16d97851fd4 in datapath 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09 bound to our chassis#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.840 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:53.841 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 595dc249-884d-44f8-8888-d36a32f65dc4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:56:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:53.842 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:53.843 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[4f96d9af-bf58-4615-b18e-6fd0b8d43927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:53.862 259225 INFO neutron.agent.dhcp.agent [None req-46896adc-d14d-4e43-b82c-1100afb88bbe - - - - - -] DHCP configuration for ports {'2e5f5375-a62e-44d2-a494-38636ec2aecf'} is completed#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.866 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.877 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.878 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11589MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.879 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.879 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.922 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:56:53 localhost nova_compute[274317]: 2026-02-01 09:56:53.922 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:56:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 4.3 KiB/s wr, 77 op/s Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.258 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:56:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:54.389 2 INFO neutron.agent.securitygroups_rpc [None req-beca9d48-bb68-446a-9132-2fee37d11230 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:56:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:56:54 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2613574519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.681 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.688 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:56:54 localhost podman[310861]: Feb 1 04:56:54 localhost podman[310861]: 2026-02-01 09:56:54.70533395 +0000 UTC m=+0.093057314 container create 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.711 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 
'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.749 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:56:54 localhost nova_compute[274317]: 2026-02-01 09:56:54.750 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:56:54 localhost systemd[1]: Started libpod-conmon-2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254.scope. Feb 1 04:56:54 localhost podman[310861]: 2026-02-01 09:56:54.659896852 +0000 UTC m=+0.047620276 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:56:54 localhost systemd[1]: Started libcrun container. Feb 1 04:56:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6cde7f4650549dcf14a16d5d08a1cd963e6a89846ce81897519e9e109b5636/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:56:54 localhost podman[310861]: 2026-02-01 09:56:54.781420427 +0000 UTC m=+0.169143791 container init 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:56:54 localhost podman[310861]: 2026-02-01 09:56:54.78700874 +0000 UTC m=+0.174732104 container start 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 04:56:54 localhost dnsmasq[310881]: started, version 2.85 cachesize 150 Feb 1 04:56:54 localhost dnsmasq[310881]: DNS service limited to local subnets Feb 1 04:56:54 localhost dnsmasq[310881]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:56:54 localhost dnsmasq[310881]: warning: no upstream servers configured Feb 1 04:56:54 localhost dnsmasq-dhcp[310881]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:56:54 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 0 addresses Feb 1 04:56:54 localhost dnsmasq-dhcp[310881]: read 
/var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:56:54 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:56:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:54.919 259225 INFO neutron.agent.dhcp.agent [None req-56a2cf73-7adf-4064-bd1d-bed08073b81f - - - - - -] DHCP configuration for ports {'6b99df92-9ee2-429d-9ef9-469fa1e443e4'} is completed#033[00m Feb 1 04:56:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:55.017 259225 INFO neutron.agent.linux.ip_lib [None req-d837ecee-c6d9-4301-8d79-0169529fdbad - - - - - -] Device tap70e8c4ee-b7 cannot be used as it has no MAC address#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.036 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost kernel: device tap70e8c4ee-b7 entered promiscuous mode Feb 1 04:56:55 localhost NetworkManager[5972]: [1769939815.0434] manager: (tap70e8c4ee-b7): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.047 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.051 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost ovn_controller[152787]: 2026-02-01T09:56:55Z|00174|binding|INFO|Claiming lport 70e8c4ee-b7bf-45c9-80c5-43450e09967e for this chassis. Feb 1 04:56:55 localhost ovn_controller[152787]: 2026-02-01T09:56:55Z|00175|binding|INFO|70e8c4ee-b7bf-45c9-80c5-43450e09967e: Claiming unknown Feb 1 04:56:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:55.075 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9279ffc0dc2f48079045ce3d49e21210', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2934a88b-2cb8-43fc-bc4a-0266d2f826b9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=70e8c4ee-b7bf-45c9-80c5-43450e09967e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:55.077 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 70e8c4ee-b7bf-45c9-80c5-43450e09967e in datapath 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d bound to our chassis#033[00m Feb 1 04:56:55 
localhost ovn_controller[152787]: 2026-02-01T09:56:55Z|00176|binding|INFO|Setting lport 70e8c4ee-b7bf-45c9-80c5-43450e09967e ovn-installed in OVS Feb 1 04:56:55 localhost ovn_controller[152787]: 2026-02-01T09:56:55Z|00177|binding|INFO|Setting lport 70e8c4ee-b7bf-45c9-80c5-43450e09967e up in Southbound Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.079 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:55.081 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.082 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:55.082 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7245f5-6413-44e2-b0f5-d15659330028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.115 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.140 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:55 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:55.254 2 INFO neutron.agent.securitygroups_rpc [None req-4f603dff-697a-4023-bd41-ff0e5bb72114 0662eb14260a4e0584613789ed9c9820 ec2f419434374ceeb2aabac212e109be - - default default] Security group member updated ['e8a8d0ce-a79e-4888-bdec-0f79f8d34440']#033[00m Feb 1 04:56:55 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:55.406 2 INFO neutron.agent.securitygroups_rpc [None req-4f603dff-697a-4023-bd41-ff0e5bb72114 0662eb14260a4e0584613789ed9c9820 ec2f419434374ceeb2aabac212e109be - - default default] Security group member updated ['e8a8d0ce-a79e-4888-bdec-0f79f8d34440']#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.752 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.752 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.753 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.753 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:55 localhost nova_compute[274317]: 2026-02-01 09:56:55.753 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:56:55 localhost podman[310945]: Feb 1 04:56:55 localhost podman[310945]: 2026-02-01 09:56:55.991516881 +0000 UTC m=+0.096314074 container create 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:56:56 localhost systemd[1]: Started libpod-conmon-6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c.scope. Feb 1 04:56:56 localhost podman[310945]: 2026-02-01 09:56:55.938745636 +0000 UTC m=+0.043542879 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:56:56 localhost systemd[1]: Started libcrun container. 
Feb 1 04:56:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a141c608be1b875224d2f7067e777389f3126fc9994644b0ea89131e8d650861/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:56:56 localhost podman[310945]: 2026-02-01 09:56:56.058382533 +0000 UTC m=+0.163179736 container init 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 04:56:56 localhost podman[310945]: 2026-02-01 09:56:56.066397082 +0000 UTC m=+0.171194275 container start 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:56:56 localhost dnsmasq[310963]: started, version 2.85 cachesize 150 Feb 1 04:56:56 localhost dnsmasq[310963]: DNS service limited to local subnets Feb 1 04:56:56 localhost dnsmasq[310963]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:56:56 localhost dnsmasq[310963]: warning: no upstream servers configured Feb 1 04:56:56 localhost dnsmasq-dhcp[310963]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:56:56 localhost dnsmasq[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/addn_hosts - 0 addresses Feb 1 04:56:56 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/host Feb 1 04:56:56 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/opts Feb 1 04:56:56 localhost nova_compute[274317]: 2026-02-01 09:56:56.130 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:56 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:56.174 259225 INFO neutron.agent.dhcp.agent [None req-7e9bc58b-b11d-4511-9264-9335808c921a - - - - - -] DHCP configuration for ports {'0c2a82db-0feb-4b25-b844-2b222d4e123e'} is completed#033[00m Feb 1 04:56:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s Feb 1 04:56:56 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:56.282 2 INFO neutron.agent.securitygroups_rpc [None req-8fcd2e7b-6ccb-4a1d-8200-e9004b8005a0 0662eb14260a4e0584613789ed9c9820 ec2f419434374ceeb2aabac212e109be - - default default] Security group member updated ['e8a8d0ce-a79e-4888-bdec-0f79f8d34440']#033[00m Feb 1 04:56:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 e157: 6 
total, 6 up, 6 in Feb 1 04:56:56 localhost nova_compute[274317]: 2026-02-01 09:56:56.931 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:56:57 localhost neutron_sriov_agent[252054]: 2026-02-01 09:56:57.427 2 INFO neutron.agent.securitygroups_rpc [None req-1f1c195c-f9d6-4ec8-8caa-a0e049b01499 0662eb14260a4e0584613789ed9c9820 ec2f419434374ceeb2aabac212e109be - - default default] Security group member updated ['e8a8d0ce-a79e-4888-bdec-0f79f8d34440']#033[00m Feb 1 04:56:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:56:57.481 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:56:58 localhost nova_compute[274317]: 2026-02-01 09:56:58.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:56:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s Feb 1 04:56:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:59.340 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.2 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:56:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:59.343 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 
updated#033[00m Feb 1 04:56:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:59.347 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:56:59 localhost ovn_metadata_agent[158650]: 2026-02-01 09:56:59.349 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f3aee197-57cc-4456-b9c8-ad3ab177f1ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:00 localhost podman[236852]: time="2026-02-01T09:57:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:57:00 localhost podman[236852]: @ - - [01/Feb/2026:09:57:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162645 "" "Go-http-client/1.1" Feb 1 04:57:00 localhost podman[236852]: @ - - [01/Feb/2026:09:57:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20191 "" "Go-http-client/1.1" Feb 1 04:57:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 58 op/s Feb 1 04:57:00 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:00.768 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=275f8795-f4d8-4210-a735-0c3c1fecd4e3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9ddb04c6-905b-4fad-ae32-f2d22672d3a0, ip_allocation=immediate, mac_address=fa:16:3e:61:e3:6e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:51Z, description=, dns_domain=, id=0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1711716350-network, port_security_enabled=True, project_id=9279ffc0dc2f48079045ce3d49e21210, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7741, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2064, status=ACTIVE, subnets=['e7952a0f-0365-4883-af11-767cb701197e'], tags=[], tenant_id=9279ffc0dc2f48079045ce3d49e21210, updated_at=2026-02-01T09:56:53Z, vlan_transparent=None, network_id=0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, port_security_enabled=False, project_id=9279ffc0dc2f48079045ce3d49e21210, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2106, status=DOWN, tags=[], tenant_id=9279ffc0dc2f48079045ce3d49e21210, updated_at=2026-02-01T09:57:00Z on network 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d#033[00m Feb 1 04:57:00 localhost podman[310981]: 2026-02-01 09:57:00.969186365 +0000 UTC m=+0.063657864 container kill 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:57:00 localhost dnsmasq[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/addn_hosts - 1 addresses Feb 1 04:57:00 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/host Feb 1 04:57:00 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/opts Feb 1 04:57:01 localhost nova_compute[274317]: 2026-02-01 09:57:01.131 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.211 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=3d164407-ec04-4038-9224-5241a42e0a84, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=77bd7428-4576-4cae-b8c9-78b8c2c7ed62, ip_allocation=immediate, mac_address=fa:16:3e:22:32:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:49Z, description=, dns_domain=, id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1557564655, port_security_enabled=True, project_id=09d03f879db542be8bf676bafcc9ce36, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40555, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2038, status=ACTIVE, subnets=['1febdf11-5537-42ea-a6e2-0feca3467664'], tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:56:51Z, vlan_transparent=None, network_id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, port_security_enabled=False, project_id=09d03f879db542be8bf676bafcc9ce36, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2102, status=DOWN, tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:57:00Z on network 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09#033[00m Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.268 259225 INFO neutron.agent.dhcp.agent [None req-2b6fd1ab-101b-498c-b25b-fa58d40768e4 - - - - - -] DHCP configuration for ports {'9ddb04c6-905b-4fad-ae32-f2d22672d3a0'} is completed#033[00m Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.335 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=cbac6734-8188-48fb-9a41-b0c64ce89f6d, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a50c835e-1670-4628-bb4d-c64fd6100a0a, ip_allocation=immediate, mac_address=fa:16:3e:bb:19:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:48Z, description=, dns_domain=, id=c3e71f40-156c-4217-bedf-836f04a8f728, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2085708237-network, port_security_enabled=True, project_id=ff200d66c230435098f5a0489bf1e8f7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55349, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2036, status=ACTIVE, subnets=['098397c5-98ca-4cc3-a654-3c1e4a604734'], tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:56:50Z, vlan_transparent=None, network_id=c3e71f40-156c-4217-bedf-836f04a8f728, port_security_enabled=False, project_id=ff200d66c230435098f5a0489bf1e8f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2104, status=DOWN, tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:57:00Z on network c3e71f40-156c-4217-bedf-836f04a8f728#033[00m Feb 1 04:57:01 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 1 addresses Feb 1 04:57:01 localhost podman[311019]: 2026-02-01 09:57:01.444715869 +0000 UTC m=+0.065330416 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:01 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:57:01 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:57:01 localhost podman[311049]: 2026-02-01 09:57:01.558780433 +0000 UTC m=+0.087238344 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:57:01 localhost openstack_network_exporter[239388]: ERROR 09:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:57:01 localhost openstack_network_exporter[239388]: Feb 1 04:57:01 localhost openstack_network_exporter[239388]: ERROR 09:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:57:01 localhost openstack_network_exporter[239388]: Feb 1 04:57:01 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 1 addresses Feb 1 04:57:01 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:57:01 localhost podman[311062]: 2026-02-01 09:57:01.586655777 +0000 UTC m=+0.078017529 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:57:01 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:57:01 localhost podman[311049]: 2026-02-01 
09:57:01.624210091 +0000 UTC m=+0.152668002 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:57:01 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.682 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=275f8795-f4d8-4210-a735-0c3c1fecd4e3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9ddb04c6-905b-4fad-ae32-f2d22672d3a0, ip_allocation=immediate, mac_address=fa:16:3e:61:e3:6e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:51Z, description=, dns_domain=, id=0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1711716350-network, port_security_enabled=True, project_id=9279ffc0dc2f48079045ce3d49e21210, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7741, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2064, status=ACTIVE, subnets=['e7952a0f-0365-4883-af11-767cb701197e'], tags=[], tenant_id=9279ffc0dc2f48079045ce3d49e21210, updated_at=2026-02-01T09:56:53Z, vlan_transparent=None, network_id=0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, port_security_enabled=False, project_id=9279ffc0dc2f48079045ce3d49e21210, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2106, status=DOWN, tags=[], tenant_id=9279ffc0dc2f48079045ce3d49e21210, updated_at=2026-02-01T09:57:00Z on network 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d#033[00m Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.779 259225 INFO neutron.agent.dhcp.agent [None req-fefd1530-5c19-427b-9291-6aa45e22d6f1 - - - - - -] DHCP configuration for ports {'77bd7428-4576-4cae-b8c9-78b8c2c7ed62'} is completed#033[00m Feb 1 04:57:01 localhost dnsmasq[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/addn_hosts - 1 addresses Feb 1 04:57:01 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/host Feb 1 04:57:01 localhost podman[311110]: 2026-02-01 09:57:01.901797052 +0000 UTC m=+0.056043738 container kill 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:57:01 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/opts Feb 1 04:57:01 localhost nova_compute[274317]: 2026-02-01 09:57:01.933 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:01 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:01.951 259225 INFO neutron.agent.dhcp.agent [None req-8dc8ecaf-b926-4324-8302-4d607bd08a07 - - - - - -] DHCP configuration for ports {'a50c835e-1670-4628-bb4d-c64fd6100a0a'} is completed#033[00m Feb 
1 04:57:02 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:02.110 259225 INFO neutron.agent.dhcp.agent [None req-a7ee453f-38a8-40dc-abad-67dbaa0fb6d7 - - - - - -] DHCP configuration for ports {'9ddb04c6-905b-4fad-ae32-f2d22672d3a0'} is completed#033[00m Feb 1 04:57:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.8 KiB/s wr, 49 op/s Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.407 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.408 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 
09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:57:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:57:03 localhost ovn_controller[152787]: 2026-02-01T09:57:03Z|00178|ovn_bfd|INFO|Enabled BFD on interface ovn-2186fb-0 Feb 1 04:57:03 localhost ovn_controller[152787]: 2026-02-01T09:57:03Z|00179|ovn_bfd|INFO|Enabled BFD on interface ovn-e1cc33-0 Feb 1 04:57:03 localhost ovn_controller[152787]: 2026-02-01T09:57:03Z|00180|ovn_bfd|INFO|Enabled BFD on interface ovn-45aa31-0 Feb 1 04:57:03 localhost nova_compute[274317]: 2026-02-01 09:57:03.934 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:03 localhost nova_compute[274317]: 2026-02-01 09:57:03.947 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:03 localhost nova_compute[274317]: 2026-02-01 09:57:03.955 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:03 localhost nova_compute[274317]: 2026-02-01 09:57:03.957 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:03 localhost nova_compute[274317]: 2026-02-01 09:57:03.998 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:04 localhost nova_compute[274317]: 2026-02-01 09:57:04.026 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail Feb 1 04:57:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:04.929 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.3 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 
'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:04.931 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:57:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:04.935 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:04.936 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbce85e-186a-48f0-a92f-bfd4c6fea436]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:04 localhost nova_compute[274317]: 2026-02-01 09:57:04.967 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:05 localhost nova_compute[274317]: 2026-02-01 09:57:05.016 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:05.068 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=cbac6734-8188-48fb-9a41-b0c64ce89f6d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a50c835e-1670-4628-bb4d-c64fd6100a0a, ip_allocation=immediate, mac_address=fa:16:3e:bb:19:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:48Z, description=, dns_domain=, id=c3e71f40-156c-4217-bedf-836f04a8f728, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2085708237-network, port_security_enabled=True, project_id=ff200d66c230435098f5a0489bf1e8f7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55349, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2036, status=ACTIVE, subnets=['098397c5-98ca-4cc3-a654-3c1e4a604734'], tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:56:50Z, vlan_transparent=None, network_id=c3e71f40-156c-4217-bedf-836f04a8f728, port_security_enabled=False, project_id=ff200d66c230435098f5a0489bf1e8f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2104, status=DOWN, tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, 
updated_at=2026-02-01T09:57:00Z on network c3e71f40-156c-4217-bedf-836f04a8f728#033[00m Feb 1 04:57:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:05.083 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:00Z, description=, device_id=3d164407-ec04-4038-9224-5241a42e0a84, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=77bd7428-4576-4cae-b8c9-78b8c2c7ed62, ip_allocation=immediate, mac_address=fa:16:3e:22:32:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:49Z, description=, dns_domain=, id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1557564655, port_security_enabled=True, project_id=09d03f879db542be8bf676bafcc9ce36, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40555, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2038, status=ACTIVE, subnets=['1febdf11-5537-42ea-a6e2-0feca3467664'], tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:56:51Z, vlan_transparent=None, network_id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, port_security_enabled=False, project_id=09d03f879db542be8bf676bafcc9ce36, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2102, status=DOWN, tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:57:00Z on network 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09#033[00m Feb 1 04:57:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:05 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 1 addresses Feb 1 04:57:05 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:57:05 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:57:05 localhost podman[311163]: 2026-02-01 09:57:05.301901744 +0000 UTC m=+0.064568032 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:57:05 localhost podman[311176]: 2026-02-01 09:57:05.360334565 +0000 UTC m=+0.067145492 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:05 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 1 addresses Feb 1 04:57:05 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:57:05 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:57:05 localhost podman[311187]: 2026-02-01 09:57:05.420358765 +0000 UTC m=+0.090879428 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:57:05 localhost podman[311187]: 2026-02-01 09:57:05.436620118 +0000 UTC m=+0.107140781 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:57:05 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:57:05 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:05.759 259225 INFO neutron.agent.dhcp.agent [None req-b0932cf5-2af1-4b0b-b4c2-e89359cef3e8 - - - - - -] DHCP configuration for ports {'a50c835e-1670-4628-bb4d-c64fd6100a0a', '77bd7428-4576-4cae-b8c9-78b8c2c7ed62'} is completed#033[00m Feb 1 04:57:05 localhost nova_compute[274317]: 2026-02-01 09:57:05.804 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:06 localhost nova_compute[274317]: 2026-02-01 09:57:06.133 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail Feb 1 04:57:06 localhost nova_compute[274317]: 2026-02-01 09:57:06.982 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 355 B/s wr, 3 op/s Feb 1 04:57:08 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:08.345 2 INFO neutron.agent.securitygroups_rpc [None req-6b01ca21-428c-44eb-a29a-b5d48a46bb1b e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:08 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:08.908 2 INFO neutron.agent.securitygroups_rpc [None req-6ad3677b-8ed2-4afa-a1c1-27a49bcc11f3 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:57:09 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:09.003 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:08Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2eb5ca0e-8f6c-4dad-ae1f-dd77c07bc083, ip_allocation=immediate, mac_address=fa:16:3e:21:db:e0, name=tempest-FloatingIPTestJSON-1252841599, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:49Z, description=, dns_domain=, id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1557564655, port_security_enabled=True, project_id=09d03f879db542be8bf676bafcc9ce36, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40555, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2038, status=ACTIVE, subnets=['1febdf11-5537-42ea-a6e2-0feca3467664'], tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:56:51Z, vlan_transparent=None, network_id=5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, port_security_enabled=True, project_id=09d03f879db542be8bf676bafcc9ce36, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=['7a11b431-4ecd-4461-a4ec-d66a85649c4d'], standard_attr_id=2139, status=DOWN, tags=[], tenant_id=09d03f879db542be8bf676bafcc9ce36, updated_at=2026-02-01T09:57:08Z on network 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09#033[00m Feb 1 04:57:09 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 2 addresses Feb 1 04:57:09 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:57:09 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:57:09 localhost podman[311245]: 2026-02-01 09:57:09.211856934 +0000 UTC m=+0.059498094 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:57:09 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:09.419 259225 INFO neutron.agent.dhcp.agent [None req-9feb0436-5fa6-482e-bbdf-cd27232ec498 - - - - - -] DHCP configuration for ports {'2eb5ca0e-8f6c-4dad-ae1f-dd77c07bc083'} is completed#033[00m Feb 1 04:57:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s Feb 1 04:57:10 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:10.902 2 INFO neutron.agent.securitygroups_rpc [None req-dadc567d-76ba-47fb-b0ef-4dff55f1d7c5 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:10 localhost nova_compute[274317]: 2026-02-01 09:57:10.984 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:11 localhost nova_compute[274317]: 2026-02-01 09:57:11.146 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:11 localhost nova_compute[274317]: 2026-02-01 09:57:11.986 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 145 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s Feb 1 04:57:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:12.280 2 INFO neutron.agent.securitygroups_rpc [None req-3e23b3f8-adbb-495f-8475-dfff7cdaa65d 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:57:12 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 1 addresses Feb 1 04:57:12 localhost 
dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:57:12 localhost podman[311282]: 2026-02-01 09:57:12.539411808 +0000 UTC m=+0.056035157 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:57:12 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:57:14 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:14.017 2 INFO neutron.agent.securitygroups_rpc [None req-f8a158f2-98c4-421b-88cc-f32123223f2c d1f5486995624e27afb3baf89715ca46 3cb13cb2ee4e4e329cfbfe3e5fc9c8b9 - - default default] Security group member updated ['13bf76e1-cfaa-4be1-a8fe-9de3506dc4bd']#033[00m Feb 1 04:57:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 233 MiB data, 1004 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 7.3 MiB/s wr, 23 op/s Feb 1 04:57:14 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:14.289 2 INFO neutron.agent.securitygroups_rpc [None req-f8a158f2-98c4-421b-88cc-f32123223f2c d1f5486995624e27afb3baf89715ca46 3cb13cb2ee4e4e329cfbfe3e5fc9c8b9 - - default default] Security group member updated ['13bf76e1-cfaa-4be1-a8fe-9de3506dc4bd']#033[00m Feb 1 04:57:14 localhost dnsmasq[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/addn_hosts - 0 addresses Feb 1 04:57:14 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/host Feb 1 04:57:14 localhost dnsmasq-dhcp[310881]: read /var/lib/neutron/dhcp/5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09/opts Feb 1 04:57:14 localhost podman[311320]: 2026-02-01 09:57:14.492773493 +0000 UTC m=+0.064765178 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:57:14 localhost nova_compute[274317]: 2026-02-01 09:57:14.654 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:14 localhost kernel: device tapdf480a46-ff left promiscuous mode Feb 1 04:57:14 localhost ovn_controller[152787]: 2026-02-01T09:57:14Z|00181|binding|INFO|Releasing lport df480a46-ffeb-469e-8528-f16d97851fd4 from this chassis (sb_readonly=0) Feb 1 04:57:14 localhost ovn_controller[152787]: 2026-02-01T09:57:14Z|00182|binding|INFO|Setting lport df480a46-ffeb-469e-8528-f16d97851fd4 down in Southbound Feb 1 04:57:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:14.665 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09d03f879db542be8bf676bafcc9ce36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d1a8906-fc18-4fe5-9368-552a4dec9770, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df480a46-ffeb-469e-8528-f16d97851fd4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:14.667 158655 INFO neutron.agent.ovn.metadata.agent [-] Port df480a46-ffeb-469e-8528-f16d97851fd4 in datapath 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09 unbound from our chassis#033[00m Feb 1 04:57:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:14.671 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:14.672 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[48a0c1d7-f913-4e49-8ccd-4c48d7fbc972]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:14 localhost nova_compute[274317]: 2026-02-01 09:57:14.678 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:15.143 2 INFO neutron.agent.securitygroups_rpc [None req-49f91d64-6065-4341-b2b5-79ecb7af7da0 d1f5486995624e27afb3baf89715ca46 3cb13cb2ee4e4e329cfbfe3e5fc9c8b9 - - default default] Security group member updated ['13bf76e1-cfaa-4be1-a8fe-9de3506dc4bd']#033[00m Feb 1 04:57:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:15.583 2 INFO neutron.agent.securitygroups_rpc [None req-55fa4d7c-6084-454f-b0ae-c51e2ec0c52d d1f5486995624e27afb3baf89715ca46 3cb13cb2ee4e4e329cfbfe3e5fc9c8b9 - - default default] Security group member updated ['13bf76e1-cfaa-4be1-a8fe-9de3506dc4bd']#033[00m Feb 1 04:57:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:15.611 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:15 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:15.794 158655 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8::f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 10.100.0.3 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:15 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:15.796 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:57:15 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:15.799 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:15 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:15.800 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4ca8cc-b206-4772-9733-2a8d9ad628f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:16 localhost nova_compute[274317]: 2026-02-01 09:57:16.149 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 233 MiB data, 1004 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 7.3 MiB/s wr, 23 op/s Feb 1 04:57:16 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:16.690 2 INFO neutron.agent.securitygroups_rpc [None req-4b8cba9a-9cba-4ac1-97f8-0546b3fd4da5 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated 
['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:16 localhost nova_compute[274317]: 2026-02-01 09:57:16.988 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:17.358 2 INFO neutron.agent.securitygroups_rpc [None req-0e44897f-42c5-42e0-aa55-3214c5bdaadc e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:17 localhost dnsmasq[310881]: exiting on receipt of SIGTERM Feb 1 04:57:17 localhost podman[311362]: 2026-02-01 09:57:17.615945905 +0000 UTC m=+0.057581525 container kill 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:57:17 localhost systemd[1]: libpod-2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254.scope: Deactivated successfully. Feb 1 04:57:17 localhost podman[311377]: 2026-02-01 09:57:17.69744025 +0000 UTC m=+0.059517995 container died 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:57:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254-userdata-shm.mount: Deactivated successfully. Feb 1 04:57:17 localhost systemd[1]: var-lib-containers-storage-overlay-8b6cde7f4650549dcf14a16d5d08a1cd963e6a89846ce81897519e9e109b5636-merged.mount: Deactivated successfully. Feb 1 04:57:17 localhost podman[311377]: 2026-02-01 09:57:17.795807788 +0000 UTC m=+0.157885463 container remove 2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bfedfe5-c714-4a56-95b1-3d4c6b3d2f09, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 1 04:57:17 localhost systemd[1]: libpod-conmon-2e1c2a3c8e99041ece37c80543564f85baab48786e71c097c189b4b153d55254.scope: Deactivated successfully. 
Feb 1 04:57:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 353 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 17 MiB/s wr, 59 op/s Feb 1 04:57:18 localhost systemd[1]: run-netns-qdhcp\x2d5bfedfe5\x2dc714\x2d4a56\x2d95b1\x2d3d4c6b3d2f09.mount: Deactivated successfully. Feb 1 04:57:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:18.401 259225 INFO neutron.agent.dhcp.agent [None req-bf851bee-a9f5-4eeb-86d1-8625335eb1bf - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:18.566 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:18 localhost dnsmasq[310310]: exiting on receipt of SIGTERM Feb 1 04:57:18 localhost podman[311419]: 2026-02-01 09:57:18.960412243 +0000 UTC m=+0.061961611 container kill d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:18 localhost systemd[1]: libpod-d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61.scope: Deactivated successfully. Feb 1 04:57:19 localhost podman[311433]: 2026-02-01 09:57:19.0345454 +0000 UTC m=+0.060402622 container died d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:57:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61-userdata-shm.mount: Deactivated successfully. Feb 1 04:57:19 localhost podman[311433]: 2026-02-01 09:57:19.066647035 +0000 UTC m=+0.092504197 container cleanup d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:19 localhost systemd[1]: libpod-conmon-d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61.scope: Deactivated successfully. 
Feb 1 04:57:19 localhost podman[311435]: 2026-02-01 09:57:19.113170967 +0000 UTC m=+0.126919645 container remove d5797bd4be489b76ff42f88c466cd274615e933da5e59a8088d05895a4206f61 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0dbb3ef-d632-48b0-b256-d985cf33ea92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:57:19 localhost nova_compute[274317]: 2026-02-01 09:57:19.124 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:19 localhost ovn_controller[152787]: 2026-02-01T09:57:19Z|00183|binding|INFO|Releasing lport 89a67c8f-abeb-44ba-987b-710ed5812b98 from this chassis (sb_readonly=0) Feb 1 04:57:19 localhost kernel: device tap89a67c8f-ab left promiscuous mode Feb 1 04:57:19 localhost ovn_controller[152787]: 2026-02-01T09:57:19Z|00184|binding|INFO|Setting lport 89a67c8f-abeb-44ba-987b-710ed5812b98 down in Southbound Feb 1 04:57:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:19.136 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c0dbb3ef-d632-48b0-b256-d985cf33ea92', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0dbb3ef-d632-48b0-b256-d985cf33ea92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d70c431093044779c88823510311e1a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7955782-fea5-4e19-bc74-89fb26d9b2eb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=89a67c8f-abeb-44ba-987b-710ed5812b98) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:19.139 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 89a67c8f-abeb-44ba-987b-710ed5812b98 in datapath c0dbb3ef-d632-48b0-b256-d985cf33ea92 unbound from our chassis#033[00m Feb 1 04:57:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:19.145 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c0dbb3ef-d632-48b0-b256-d985cf33ea92 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:57:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:19.146 303130 DEBUG oslo.privsep.daemon [-] privsep: 
reply[a87e5c6d-4853-499b-aa72-9b671424bc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:19 localhost nova_compute[274317]: 2026-02-01 09:57:19.149 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:19.151 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:19.413 259225 INFO neutron.agent.dhcp.agent [None req-dd00effc-1f7e-48f0-9bfe-25253988c234 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:19.413 259225 INFO neutron.agent.dhcp.agent [None req-dd00effc-1f7e-48f0-9bfe-25253988c234 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:19.420 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:19.455 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:19 localhost nova_compute[274317]: 2026-02-01 09:57:19.581 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:19 localhost systemd[1]: var-lib-containers-storage-overlay-ede7d4779104218ea621e19c563a4f5da37bdff6220daf39da5c830ef37d9d02-merged.mount: Deactivated successfully. Feb 1 04:57:19 localhost systemd[1]: run-netns-qdhcp\x2dc0dbb3ef\x2dd632\x2d48b0\x2db256\x2dd985cf33ea92.mount: Deactivated successfully. Feb 1 04:57:20 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:20.057 2 INFO neutron.agent.securitygroups_rpc [None req-66f0be61-daee-4cdf-a282-a2ed1512143e e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 353 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 17 MiB/s wr, 55 op/s Feb 1 04:57:20 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:20.498 2 INFO neutron.agent.securitygroups_rpc [None req-a90c741a-39d3-40c8-bb6e-b94dde79eb43 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:20 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:20.955 2 INFO neutron.agent.securitygroups_rpc [None req-6d308bdf-c84d-45b6-96f5-9c77d97fcd46 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:57:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:57:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3982317258' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:57:21 localhost nova_compute[274317]: 2026-02-01 09:57:21.151 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:57:21 Feb 1 04:57:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:57:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:57:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['.mgr', 'manila_data', 'vms', 'volumes', 'manila_metadata', 'backups', 'images'] Feb 1 04:57:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:21 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:21.583 2 INFO neutron.agent.securitygroups_rpc [None req-510b41ab-e548-47d9-b4e7-2bdc2eb9aebb 80e349351b8943ebac895c06dc769fa1 09d03f879db542be8bf676bafcc9ce36 - - default default] Security group member updated ['7a11b431-4ecd-4461-a4ec-d66a85649c4d']#033[00m Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002721761294900428 quantized to 32 (current 32) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.01918918958217011 of space, bias 1.0, pg target 3.831441519906632 quantized to 32 (current 32) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.34355806811837e-05 quantized to 32 (current 32) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: 
[pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:57:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002137423227247348 quantized to 16 (current 16) Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:57:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 04:57:21 localhost podman[311461]: 2026-02-01 09:57:21.863641269 +0000 UTC m=+0.077729059 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2) Feb 1 04:57:21 localhost podman[311461]: 2026-02-01 09:57:21.90366215 +0000 UTC m=+0.117749940 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127) Feb 1 04:57:21 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:57:21 localhost podman[311462]: 2026-02-01 09:57:21.924412152 +0000 UTC m=+0.135302843 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:57:21 localhost podman[311462]: 2026-02-01 09:57:21.932499653 +0000 UTC m=+0.143390314 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 04:57:21 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:57:22 localhost nova_compute[274317]: 2026-02-01 09:57:22.008 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 353 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 17 MiB/s wr, 55 op/s Feb 1 04:57:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e158 e158: 6 total, 6 up, 6 in Feb 1 04:57:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e159 e159: 6 total, 6 up, 6 in Feb 1 04:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:57:23 localhost podman[311508]: 2026-02-01 09:57:23.881446682 +0000 UTC m=+0.095634085 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:57:23 localhost podman[311508]: 2026-02-01 09:57:23.889703928 +0000 UTC m=+0.103891341 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, io.openshift.expose-services=, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1769056855, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 1 04:57:23 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
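The recurring pattern around these entries is: systemd starts a transient "/usr/bin/podman healthcheck run <ID>" unit, podman logs health_status and exec_died for that container, and "<ID>.service: Deactivated successfully" closes it out. A hedged sketch for pairing those start/stop lines from a saved copy of this log and timing each run; the regexes and the hard-coded year are guesses fitted to these samples, and the syslog prefix only gives one-second resolution:

    import re
    from datetime import datetime

    START_RE = re.compile(r"^(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
                          r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})")
    STOP_RE = re.compile(r"^(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
                         r"(?P<cid>[0-9a-f]{64})\.service: Deactivated successfully")

    def _ts(text):
        # The syslog timestamp carries no year; 2026 is assumed from the podman lines.
        return datetime.strptime(f"2026 {text}", "%Y %b %d %H:%M:%S")

    def healthcheck_durations(lines):
        """Yield (container_id, seconds) for each matched start/stop pair."""
        started = {}
        for line in lines:
            if m := START_RE.match(line):
                started[m["cid"]] = _ts(m["ts"])
            elif (m := STOP_RE.match(line)) and m["cid"] in started:
                yield m["cid"], (_ts(m["ts"]) - started.pop(m["cid"])).total_seconds()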
Feb 1 04:57:23 localhost podman[311509]: 2026-02-01 09:57:23.97790971 +0000 UTC m=+0.187774659 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent) Feb 1 04:57:23 localhost podman[311509]: 2026-02-01 09:57:23.982952847 +0000 UTC m=+0.192817786 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:23 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:57:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 536 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 2.6 MiB/s rd, 35 MiB/s wr, 131 op/s Feb 1 04:57:24 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:24.953 2 INFO neutron.agent.securitygroups_rpc [None req-9d70792f-5f72-48f9-b951-877d0761d664 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:25.273 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:25 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:25.902 2 INFO neutron.agent.securitygroups_rpc [None req-4b206305-83e7-4f57-ba9f-2e24f96d5798 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:26 localhost nova_compute[274317]: 2026-02-01 09:57:26.153 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 536 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 48 KiB/s rd, 20 MiB/s wr, 77 op/s Feb 1 04:57:27 localhost nova_compute[274317]: 2026-02-01 09:57:27.049 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:27 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:27.099 2 INFO neutron.agent.securitygroups_rpc [None req-beee55c0-e969-4dc8-abc8-cdbdc16af93f 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:28 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:28.143 2 INFO neutron.agent.securitygroups_rpc [None req-b32fd076-8c86-4555-94ea-b4066e09ed5c e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 716 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 4.7 MiB/s rd, 38 MiB/s wr, 170 op/s Feb 1 04:57:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e160 e160: 6 total, 6 up, 6 in Feb 1 04:57:29 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:29.099 2 INFO neutron.agent.securitygroups_rpc [None req-2d3763ba-9699-499d-861c-79c864912ba7 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] 
Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:29 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:29.202 2 INFO neutron.agent.securitygroups_rpc [None req-2d3763ba-9699-499d-861c-79c864912ba7 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:29 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:29.439 2 INFO neutron.agent.securitygroups_rpc [None req-2958eafe-4222-42a5-8f58-07ed853ce57d e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:29 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:29.550 2 INFO neutron.agent.securitygroups_rpc [None req-dc57259b-517e-469b-bbab-e176f8bbf6e4 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:30 localhost podman[236852]: time="2026-02-01T09:57:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:57:30 localhost podman[236852]: @ - - [01/Feb/2026:09:57:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159004 "" "Go-http-client/1.1" Feb 1 04:57:30 localhost podman[236852]: @ - - [01/Feb/2026:09:57:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19245 "" "Go-http-client/1.1" Feb 1 04:57:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 716 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 4.7 MiB/s rd, 18 MiB/s wr, 96 op/s Feb 1 04:57:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e161 e161: 6 total, 6 up, 6 in Feb 1 04:57:30 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:30.498 2 INFO neutron.agent.securitygroups_rpc [None req-a9710b04-3715-469f-ada4-600e33182b7e 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:30 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:30.534 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:31 localhost nova_compute[274317]: 2026-02-01 09:57:31.155 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.440543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851440574, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2620, "num_deletes": 264, "total_data_size": 4027248, "memory_usage": 4098752, "flush_reason": "Manual Compaction"} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Feb 1 04:57:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e162 e162: 6 total, 6 up, 6 in Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851459069, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2595391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19778, "largest_seqno": 22393, "table_properties": {"data_size": 2585328, "index_size": 6440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21714, "raw_average_key_size": 21, "raw_value_size": 2564997, "raw_average_value_size": 2562, "num_data_blocks": 273, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939696, "oldest_key_time": 1769939696, "file_creation_time": 1769939851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18623 microseconds, and 7223 cpu microseconds. Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
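As a quick sanity check on the flush above (JOB 15): the table_file_creation event puts table #32 at 2,595,391 bytes and the flush is logged as lasting 18,623 microseconds, i.e. roughly 139 MB/s. Illustrative arithmetic only, using decimal megabytes:

    # Numbers copied from the JOB 15 flush entries above.
    file_size = 2_595_391   # "file_size": 2595391 (table #32)
    micros = 18_623         # "Flush lasted 18623 microseconds"
    print(f"~{file_size / micros:.0f} MB/s written during the flush")  # ~139 MB/s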
Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.459157) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2595391 bytes OK Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.459186) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.461181) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.461206) EVENT_LOG_v1 {"time_micros": 1769939851461200, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.461229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 4015460, prev total WAL file size 4015501, number of live WAL files 2. Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.462735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2534KB)], [30(18MB)] Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851462804, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 21712062, "oldest_snapshot_seqno": -1} Feb 1 04:57:31 localhost openstack_network_exporter[239388]: ERROR 09:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:57:31 localhost openstack_network_exporter[239388]: Feb 1 04:57:31 localhost openstack_network_exporter[239388]: ERROR 09:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:57:31 localhost openstack_network_exporter[239388]: Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12653 keys, 19460804 bytes, temperature: kUnknown Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851588949, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 19460804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19389720, "index_size": 38343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 31685, "raw_key_size": 341153, "raw_average_key_size": 26, "raw_value_size": 19175060, "raw_average_value_size": 1515, "num_data_blocks": 1437, "num_entries": 12653, "num_filter_entries": 12653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.589275) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 19460804 bytes Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.592280) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.0 rd, 154.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 18.2 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(15.9) write-amplify(7.5) OK, records in: 13196, records dropped: 543 output_compression: NoCompression Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.592347) EVENT_LOG_v1 {"time_micros": 1769939851592335, "job": 16, "event": "compaction_finished", "compaction_time_micros": 126214, "compaction_time_cpu_micros": 54284, "output_level": 6, "num_output_files": 1, "total_output_size": 19460804, "num_input_records": 13196, "num_output_records": 12653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851592861, "job": 16, "event": "table_file_deletion", "file_number": 32} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939851595953, "job": 16, "event": "table_file_deletion", "file_number": 30} Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.462428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: 
(Original Log Time 2026/02/01-09:57:31.596370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.596376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.596379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.596382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:57:31.596386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:57:31 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:31.802 2 INFO neutron.agent.securitygroups_rpc [None req-c82c049d-951f-4fb2-91dc-f79774ee784c e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:31 localhost systemd[1]: tmp-crun.KxU2t5.mount: Deactivated successfully. Feb 1 04:57:31 localhost podman[311544]: 2026-02-01 09:57:31.883892176 +0000 UTC m=+0.097534763 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 04:57:31 localhost podman[311544]: 2026-02-01 09:57:31.89434684 +0000 UTC m=+0.107989387 container exec_died 
3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127) Feb 1 04:57:31 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:57:31 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:31.940 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:31 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:31.942 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:57:31 localhost nova_compute[274317]: 2026-02-01 09:57:31.981 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:57:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3454001065' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:57:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:57:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3454001065' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:57:32 localhost nova_compute[274317]: 2026-02-01 09:57:32.053 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 716 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 6.2 MiB/s rd, 24 MiB/s wr, 125 op/s Feb 1 04:57:32 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:32.336 2 INFO neutron.agent.securitygroups_rpc [None req-ae637900-6d4f-4914-b5d3-eacda6bf763f e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e163 e163: 6 total, 6 up, 6 in Feb 1 04:57:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 838 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 1.1 MiB/s rd, 26 MiB/s wr, 156 op/s Feb 1 04:57:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:35 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:35.132 259225 INFO neutron.agent.linux.ip_lib [None req-7b00ed42-b72e-4f66-b893-28b4bb7bf099 - - - - - -] Device tapff2b3531-69 cannot be used as it has no MAC address#033[00m Feb 1 04:57:35 localhost nova_compute[274317]: 2026-02-01 09:57:35.153 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:35 localhost kernel: device tapff2b3531-69 entered promiscuous mode Feb 1 04:57:35 localhost NetworkManager[5972]: [1769939855.1620] manager: (tapff2b3531-69): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Feb 1 04:57:35 localhost nova_compute[274317]: 2026-02-01 09:57:35.162 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:35 localhost ovn_controller[152787]: 2026-02-01T09:57:35Z|00185|binding|INFO|Claiming lport ff2b3531-69db-424e-a495-69e43824d008 for this chassis. Feb 1 04:57:35 localhost ovn_controller[152787]: 2026-02-01T09:57:35Z|00186|binding|INFO|ff2b3531-69db-424e-a495-69e43824d008: Claiming unknown Feb 1 04:57:35 localhost systemd-udevd[311573]: Network interface NamePolicy= disabled on kernel command line. 
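Back on the RocksDB compaction a few entries up (JOB 16): its summary figures can be reproduced from the event-log values, assuming decimal megabytes and amplification measured against the newly flushed L0 file. Illustrative arithmetic, not the output of any Ceph or RocksDB tool:

    # Values copied from the JOB 15/16 entries above.
    l0_input = 2_595_391        # table #32, the freshly flushed L0 file ("2534KB")
    total_read = 21_712_062     # "input_data_size" from the compaction_started event
    total_written = 19_460_804  # "total_output_size" (table #33)
    micros = 126_214            # "compaction_time_micros"

    print(f"rd {total_read / micros:.1f} MB/s, wr {total_written / micros:.1f} MB/s")
    print(f"write-amplify {total_written / l0_input:.1f}, "
          f"read-write-amplify {(total_read + total_written) / l0_input:.1f}")
    # Matches the logged "172.0 rd, 154.2 wr ... read-write-amplify(15.9) write-amplify(7.5)".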
Feb 1 04:57:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:35.174 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c94cfbe2-b38a-4f65-b5ff-344bf4929a50', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c94cfbe2-b38a-4f65-b5ff-344bf4929a50', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e1ea1a33e554968ba8ebaf6753c9c5d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=049a24bb-789a-44cb-8aa4-57bf18fabc72, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ff2b3531-69db-424e-a495-69e43824d008) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:35.176 158655 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b3531-69db-424e-a495-69e43824d008 in datapath c94cfbe2-b38a-4f65-b5ff-344bf4929a50 bound to our chassis#033[00m Feb 1 04:57:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:35.180 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 153aae87-7053-41f9-b4ef-7b79c315171f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:57:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:35.180 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c94cfbe2-b38a-4f65-b5ff-344bf4929a50, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:35 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:35.184 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[85f0c8b0-e080-43fe-800d-43d4aa2ee05b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:35 localhost ovn_controller[152787]: 2026-02-01T09:57:35Z|00187|binding|INFO|Setting lport ff2b3531-69db-424e-a495-69e43824d008 ovn-installed in OVS Feb 1 04:57:35 localhost ovn_controller[152787]: 2026-02-01T09:57:35Z|00188|binding|INFO|Setting lport ff2b3531-69db-424e-a495-69e43824d008 up in Southbound Feb 1 04:57:35 localhost nova_compute[274317]: 2026-02-01 09:57:35.208 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:35 localhost nova_compute[274317]: 2026-02-01 09:57:35.243 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:35 localhost nova_compute[274317]: 2026-02-01 09:57:35.277 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:57:35 localhost podman[311606]: 2026-02-01 09:57:35.91210386 +0000 UTC m=+0.116545081 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:57:35 localhost podman[311606]: 2026-02-01 09:57:35.929498919 +0000 UTC m=+0.133940150 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:57:35 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
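The "GET /v4.9.3/libpod/containers/json" and ".../stats" requests logged by podman[236852] a little earlier, together with the podman_exporter config just above (CONTAINER_HOST=unix:///run/podman/podman.sock), are the libpod REST API being scraped over the podman socket. A minimal stdlib sketch of issuing the same containers/json call; the socket path is taken from that config, and whether it is reachable (and readable by your user) is environment-specific:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket to replay the libpod GETs above."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")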
Feb 1 04:57:36 localhost nova_compute[274317]: 2026-02-01 09:57:36.157 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:36 localhost podman[311648]: Feb 1 04:57:36 localhost podman[311648]: 2026-02-01 09:57:36.233878301 +0000 UTC m=+0.088980669 container create 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 838 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 1.1 MiB/s rd, 26 MiB/s wr, 153 op/s Feb 1 04:57:36 localhost systemd[1]: Started libpod-conmon-2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb.scope. Feb 1 04:57:36 localhost podman[311648]: 2026-02-01 09:57:36.190536078 +0000 UTC m=+0.045638496 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:57:36 localhost systemd[1]: tmp-crun.oKGqmN.mount: Deactivated successfully. Feb 1 04:57:36 localhost systemd[1]: Started libcrun container. Feb 1 04:57:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be4dd53ceceb959b546f67efa71b185264c4777498bb167cd87c7cd7f3c9fe8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:57:36 localhost podman[311648]: 2026-02-01 09:57:36.340891976 +0000 UTC m=+0.195994364 container init 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:36 localhost podman[311648]: 2026-02-01 09:57:36.351174135 +0000 UTC m=+0.206276523 container start 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:57:36 localhost dnsmasq[311666]: started, version 2.85 cachesize 150 Feb 1 04:57:36 localhost dnsmasq[311666]: DNS service limited to local subnets Feb 1 04:57:36 localhost dnsmasq[311666]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:57:36 localhost dnsmasq[311666]: warning: no 
upstream servers configured Feb 1 04:57:36 localhost dnsmasq-dhcp[311666]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 1 04:57:36 localhost dnsmasq[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/addn_hosts - 0 addresses Feb 1 04:57:36 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/host Feb 1 04:57:36 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/opts Feb 1 04:57:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:36.549 259225 INFO neutron.agent.dhcp.agent [None req-6ee81a67-a201-4f64-a965-657c7a7e324f - - - - - -] DHCP configuration for ports {'08753108-79e1-436c-9b23-2aa988e503fa'} is completed#033[00m Feb 1 04:57:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e164 e164: 6 total, 6 up, 6 in Feb 1 04:57:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e165 e165: 6 total, 6 up, 6 in Feb 1 04:57:37 localhost nova_compute[274317]: 2026-02-01 09:57:37.086 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:37.814 2 INFO neutron.agent.securitygroups_rpc [None req-0302dcee-a94e-448d-b9a1-97eb07e05bc2 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:38 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:38.257 2 INFO neutron.agent.securitygroups_rpc [None req-56c81e71-4e5b-4ed6-b56f-ca0ee463b60f 388100543d2c4f8fb0150ffdd8da2504 674a59d5810c453484339f60db55c64e - - default default] Security group member updated ['41d73aa2-6075-4985-b34c-e67fa66518ee']#033[00m Feb 1 04:57:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 889 MiB data, 3.0 GiB used, 39 GiB / 42 GiB avail; 1.1 MiB/s rd, 50 MiB/s wr, 264 op/s Feb 1 04:57:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:57:38 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:57:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:57:38 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:57:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:57:38 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 5dae15b9-7e9f-47ed-b9ed-103090603098 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:57:38 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 5dae15b9-7e9f-47ed-b9ed-103090603098 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:57:38 localhost ceph-mgr[278126]: [progress INFO root] Completed event 5dae15b9-7e9f-47ed-b9ed-103090603098 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:57:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"} v 0) Feb 1 04:57:38 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:57:38 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:57:38 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:38.817 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:38Z, description=, device_id=a4c0ff24-9b72-4e78-ad9f-fbd408c26d38, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1096bc33-0247-4180-b3cc-295157fa16a5, ip_allocation=immediate, mac_address=fa:16:3e:21:3b:a6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:57:29Z, description=, dns_domain=, id=c94cfbe2-b38a-4f65-b5ff-344bf4929a50, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-482542227, port_security_enabled=True, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4706, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2209, status=ACTIVE, subnets=['7d86d575-ccf2-4403-beee-fe491e92869a'], tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:57:33Z, vlan_transparent=None, network_id=c94cfbe2-b38a-4f65-b5ff-344bf4929a50, port_security_enabled=False, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2273, status=DOWN, tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:57:38Z on network c94cfbe2-b38a-4f65-b5ff-344bf4929a50#033[00m Feb 1 04:57:39 localhost systemd[1]: tmp-crun.hgXKhx.mount: Deactivated successfully. 
Feb 1 04:57:39 localhost podman[311771]: 2026-02-01 09:57:39.095984423 +0000 UTC m=+0.060459025 container kill 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:39 localhost dnsmasq[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/addn_hosts - 1 addresses Feb 1 04:57:39 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/host Feb 1 04:57:39 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/opts Feb 1 04:57:39 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:39.302 259225 INFO neutron.agent.dhcp.agent [None req-1fb6d6cd-53c0-4b42-84c7-340330e81d56 - - - - - -] DHCP configuration for ports {'1096bc33-0247-4180-b3cc-295157fa16a5'} is completed#033[00m Feb 1 04:57:39 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:39.361 2 INFO neutron.agent.securitygroups_rpc [None req-e6670585-76cd-441d-8f68-6f14a6f35b07 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:39 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:57:39 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:39.699 2 INFO neutron.agent.securitygroups_rpc [None req-d6e67e7e-f7e2-4516-bf2c-7113ac674e15 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:39 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:39.944 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:57:40 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:40.097 2 INFO neutron.agent.securitygroups_rpc [None req-1328bdc5-e5a5-40ba-b48b-34d65147d68f afad352e9d664799bf5de0cadcf3c7cd ff200d66c230435098f5a0489bf1e8f7 - - default default] Security group rule updated ['95400daf-a74d-4007-ac5f-e79aa8e5c1cd']#033[00m Feb 1 04:57:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 889 MiB data, 3.0 GiB used, 39 GiB / 42 GiB avail; 65 KiB/s rd, 21 MiB/s wr, 97 op/s Feb 1 04:57:40 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:40.520 2 INFO neutron.agent.securitygroups_rpc [None req-a0493fd0-d741-481f-ad73-307c48cc986a afad352e9d664799bf5de0cadcf3c7cd ff200d66c230435098f5a0489bf1e8f7 - - default default] Security group rule updated ['95400daf-a74d-4007-ac5f-e79aa8e5c1cd']#033[00m Feb 1 04:57:40 localhost 
neutron_sriov_agent[252054]: 2026-02-01 09:57:40.790 2 INFO neutron.agent.securitygroups_rpc [None req-ffd61713-cff0-4f50-b6e4-2930ab2a8c56 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:41 localhost nova_compute[274317]: 2026-02-01 09:57:41.196 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:41 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:41.550 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:38Z, description=, device_id=a4c0ff24-9b72-4e78-ad9f-fbd408c26d38, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1096bc33-0247-4180-b3cc-295157fa16a5, ip_allocation=immediate, mac_address=fa:16:3e:21:3b:a6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:57:29Z, description=, dns_domain=, id=c94cfbe2-b38a-4f65-b5ff-344bf4929a50, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-482542227, port_security_enabled=True, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4706, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2209, status=ACTIVE, subnets=['7d86d575-ccf2-4403-beee-fe491e92869a'], tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:57:33Z, vlan_transparent=None, network_id=c94cfbe2-b38a-4f65-b5ff-344bf4929a50, port_security_enabled=False, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2273, status=DOWN, tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:57:38Z on network c94cfbe2-b38a-4f65-b5ff-344bf4929a50#033[00m Feb 1 04:57:41 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:57:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:57:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e166 e166: 6 total, 6 up, 6 in Feb 1 04:57:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:41.774 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:57:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:41.775 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:57:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:41.775 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:57:41 localhost systemd[1]: tmp-crun.9znZ3y.mount: Deactivated successfully. Feb 1 04:57:41 localhost dnsmasq[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/addn_hosts - 1 addresses Feb 1 04:57:41 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/host Feb 1 04:57:41 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/opts Feb 1 04:57:41 localhost podman[311808]: 2026-02-01 09:57:41.793819015 +0000 UTC m=+0.082553059 container kill 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:42 localhost nova_compute[274317]: 2026-02-01 09:57:42.088 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:42 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:42.103 259225 INFO neutron.agent.dhcp.agent [None req-b7df7686-a057-4d07-b6dc-0a5bd98054fa - - - - - -] DHCP configuration for ports {'1096bc33-0247-4180-b3cc-295157fa16a5'} is completed#033[00m Feb 1 04:57:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 889 MiB data, 3.0 GiB used, 39 GiB / 42 GiB avail; 73 KiB/s rd, 24 MiB/s wr, 110 op/s Feb 1 04:57:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:42.270 2 INFO neutron.agent.securitygroups_rpc [None req-cf10ff71-4763-464e-9c85-a62c4de0813a 388100543d2c4f8fb0150ffdd8da2504 674a59d5810c453484339f60db55c64e - - default default] Security group member updated ['41d73aa2-6075-4985-b34c-e67fa66518ee']#033[00m Feb 1 04:57:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:57:43 localhost dnsmasq[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/addn_hosts - 0 addresses Feb 1 04:57:43 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/host Feb 1 04:57:43 localhost dnsmasq-dhcp[311666]: read /var/lib/neutron/dhcp/c94cfbe2-b38a-4f65-b5ff-344bf4929a50/opts Feb 1 04:57:43 localhost podman[311843]: 2026-02-01 09:57:43.879343545 +0000 UTC m=+0.051361302 container kill 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:57:44 localhost nova_compute[274317]: 2026-02-01 09:57:44.060 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:44 
localhost kernel: device tapff2b3531-69 left promiscuous mode Feb 1 04:57:44 localhost ovn_controller[152787]: 2026-02-01T09:57:44Z|00189|binding|INFO|Releasing lport ff2b3531-69db-424e-a495-69e43824d008 from this chassis (sb_readonly=0) Feb 1 04:57:44 localhost ovn_controller[152787]: 2026-02-01T09:57:44Z|00190|binding|INFO|Setting lport ff2b3531-69db-424e-a495-69e43824d008 down in Southbound Feb 1 04:57:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:44.068 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c94cfbe2-b38a-4f65-b5ff-344bf4929a50', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c94cfbe2-b38a-4f65-b5ff-344bf4929a50', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e1ea1a33e554968ba8ebaf6753c9c5d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=049a24bb-789a-44cb-8aa4-57bf18fabc72, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ff2b3531-69db-424e-a495-69e43824d008) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:44.070 158655 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b3531-69db-424e-a495-69e43824d008 in datapath c94cfbe2-b38a-4f65-b5ff-344bf4929a50 unbound from our chassis#033[00m Feb 1 04:57:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:44.074 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c94cfbe2-b38a-4f65-b5ff-344bf4929a50, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:44.075 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ae4823-d30e-4b8d-8808-997cf4175e62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:44 localhost nova_compute[274317]: 2026-02-01 09:57:44.083 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 69 KiB/s rd, 36 MiB/s wr, 107 op/s Feb 1 04:57:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:45 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:45.133 2 INFO neutron.agent.securitygroups_rpc [None req-1b4acf80-44e4-4b72-89ee-2b772b9e0127 e0ee367368fd4fbebf2e13aa0ff98129 
fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:45 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:45.712 2 INFO neutron.agent.securitygroups_rpc [None req-83cc4c41-7e96-4a99-8728-f0acec1b6354 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:46 localhost nova_compute[274317]: 2026-02-01 09:57:46.198 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 55 KiB/s rd, 29 MiB/s wr, 86 op/s Feb 1 04:57:47 localhost nova_compute[274317]: 2026-02-01 09:57:47.147 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:47 localhost systemd[1]: tmp-crun.kYsI3s.mount: Deactivated successfully. Feb 1 04:57:47 localhost dnsmasq[311666]: exiting on receipt of SIGTERM Feb 1 04:57:47 localhost podman[311882]: 2026-02-01 09:57:47.243827533 +0000 UTC m=+0.078386210 container kill 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:57:47 localhost systemd[1]: libpod-2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb.scope: Deactivated successfully. Feb 1 04:57:47 localhost podman[311896]: 2026-02-01 09:57:47.316871256 +0000 UTC m=+0.053368174 container died 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 1 04:57:47 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:47.323 2 INFO neutron.agent.securitygroups_rpc [req-501f4cee-4307-45db-b319-f1b9ce6bf1c9 req-e1bd36d6-c912-4d1e-9e97-4f42b13c68e6 afad352e9d664799bf5de0cadcf3c7cd ff200d66c230435098f5a0489bf1e8f7 - - default default] Security group member updated ['95400daf-a74d-4007-ac5f-e79aa8e5c1cd']#033[00m Feb 1 04:57:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb-userdata-shm.mount: Deactivated successfully. 
Feb 1 04:57:47 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:47.366 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:46Z, description=, device_id=125e04fe-9d17-4c49-90a0-ac05d2f548c1, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5da6f481-393c-409c-8dea-40614079a5c1, ip_allocation=immediate, mac_address=fa:16:3e:74:df:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:56:48Z, description=, dns_domain=, id=c3e71f40-156c-4217-bedf-836f04a8f728, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2085708237-network, port_security_enabled=True, project_id=ff200d66c230435098f5a0489bf1e8f7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55349, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2036, status=ACTIVE, subnets=['098397c5-98ca-4cc3-a654-3c1e4a604734'], tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:56:50Z, vlan_transparent=None, network_id=c3e71f40-156c-4217-bedf-836f04a8f728, port_security_enabled=True, project_id=ff200d66c230435098f5a0489bf1e8f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['95400daf-a74d-4007-ac5f-e79aa8e5c1cd'], standard_attr_id=2322, status=DOWN, tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:57:47Z on network c3e71f40-156c-4217-bedf-836f04a8f728#033[00m Feb 1 04:57:47 localhost podman[311896]: 2026-02-01 09:57:47.389626691 +0000 UTC m=+0.126123569 container remove 2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c94cfbe2-b38a-4f65-b5ff-344bf4929a50, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:57:47 localhost systemd[1]: libpod-conmon-2b51db5aee8e443f525a345d92130e93dd36d8776f37c21e9b109f623ca160cb.scope: Deactivated successfully. 
Feb 1 04:57:47 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 2 addresses Feb 1 04:57:47 localhost podman[311940]: 2026-02-01 09:57:47.725853489 +0000 UTC m=+0.063315053 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 04:57:47 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:57:47 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:57:47 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:47.988 259225 INFO neutron.agent.dhcp.agent [None req-93010f84-a8da-46e7-b2dd-7877f51a21a5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:48 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:48.003 259225 INFO neutron.agent.dhcp.agent [None req-0899b159-fde9-485f-bebc-e4248be0beac - - - - - -] DHCP configuration for ports {'5da6f481-393c-409c-8dea-40614079a5c1'} is completed#033[00m Feb 1 04:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-be4dd53ceceb959b546f67efa71b185264c4777498bb167cd87c7cd7f3c9fe8d-merged.mount: Deactivated successfully. Feb 1 04:57:48 localhost systemd[1]: run-netns-qdhcp\x2dc94cfbe2\x2db38a\x2d4f65\x2db5ff\x2d344bf4929a50.mount: Deactivated successfully. 
Feb 1 04:57:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 1.2 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 27 KiB/s rd, 29 MiB/s wr, 50 op/s Feb 1 04:57:48 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:48.332 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e167 e167: 6 total, 6 up, 6 in Feb 1 04:57:48 localhost nova_compute[274317]: 2026-02-01 09:57:48.857 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:49 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:49.050 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005604213.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:57:46Z, description=, device_id=125e04fe-9d17-4c49-90a0-ac05d2f548c1, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-volumesbackupstest-instance-1295046314, extra_dhcp_opts=[], fixed_ips=[], id=5da6f481-393c-409c-8dea-40614079a5c1, ip_allocation=immediate, mac_address=fa:16:3e:74:df:91, name=, network_id=c3e71f40-156c-4217-bedf-836f04a8f728, port_security_enabled=True, project_id=ff200d66c230435098f5a0489bf1e8f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['95400daf-a74d-4007-ac5f-e79aa8e5c1cd'], standard_attr_id=2322, status=DOWN, tags=[], tenant_id=ff200d66c230435098f5a0489bf1e8f7, updated_at=2026-02-01T09:57:48Z on network c3e71f40-156c-4217-bedf-836f04a8f728#033[00m Feb 1 04:57:49 localhost systemd[1]: tmp-crun.nUhgxs.mount: Deactivated successfully. 
Feb 1 04:57:49 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 2 addresses Feb 1 04:57:49 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:57:49 localhost podman[311978]: 2026-02-01 09:57:49.28857796 +0000 UTC m=+0.074370765 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 1 04:57:49 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:57:49 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:49.528 259225 INFO neutron.agent.dhcp.agent [None req-c34df685-afcd-46cf-ae8c-06988204d5f1 - - - - - -] DHCP configuration for ports {'5da6f481-393c-409c-8dea-40614079a5c1'} is completed#033[00m Feb 1 04:57:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 04:57:50 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2225 writes, 22K keys, 2225 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s#012Cumulative WAL: 2225 writes, 2225 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2225 writes, 22K keys, 2225 commit groups, 1.0 writes per commit group, ingest: 41.36 MB, 0.07 MB/s#012Interval WAL: 2225 writes, 2225 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 197.5 0.16 0.07 8 0.020 0 0 0.0 0.0#012 L6 1/0 18.56 MB 0.0 0.2 0.0 0.1 0.1 0.0 0.0 4.4 167.5 152.7 0.93 0.36 7 0.133 89K 3435 0.0 0.0#012 Sum 1/0 18.56 MB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 5.4 142.5 159.4 1.09 0.42 15 0.073 89K 3435 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 5.4 142.8 159.8 1.09 0.42 14 0.078 89K 3435 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.1 0.0 0.0 0.0 167.5 152.7 0.93 0.36 7 0.133 89K 3435 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 200.5 0.16 0.07 7 0.023 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.032#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.29 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Interval compaction: 0.17 GB write, 0.29 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ae85ff1f0#2 capacity: 308.00 MB usage: 15.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000121 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(682,14.67 MB,4.76337%) FilterBlock(15,267.42 KB,0.0847903%) IndexBlock(15,350.02 KB,0.110978%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 1 04:57:50 localhost nova_compute[274317]: 2026-02-01 09:57:50.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 1 04:57:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 1.2 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 32 KiB/s rd, 34 MiB/s wr, 59 op/s Feb 1 04:57:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e168 e168: 6 total, 6 up, 6 in Feb 1 04:57:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:50.818 2 INFO neutron.agent.securitygroups_rpc [None req-c7462549-dc03-49f9-bdc5-6b707b180a08 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:51 localhost nova_compute[274317]: 2026-02-01 09:57:51.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:51 localhost nova_compute[274317]: 2026-02-01 09:57:51.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:57:51 localhost nova_compute[274317]: 2026-02-01 09:57:51.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:57:51 localhost nova_compute[274317]: 2026-02-01 09:57:51.120 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any 
instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:57:51 localhost nova_compute[274317]: 2026-02-01 09:57:51.200 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e169 e169: 6 total, 6 up, 6 in Feb 1 04:57:51 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:51.496 2 INFO neutron.agent.securitygroups_rpc [None req-f8e6f518-1a93-44d5-bed3-15bc3df0d353 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:57:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:57:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:51.513 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:51.767 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8:0:1:f816:3eff:fe29:3cbc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6df31c80-e655-4133-9c32-9708470a03c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d4bc4012-7c81-4a7f-9a67-f9545d549873) old=Port_Binding(mac=['fa:16:3e:29:3c:bc 2001:db8::f816:3eff:fe29:3cbc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:3cbc/64', 'neutron:device_id': 'ovnmeta-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cba39058-6a05-4f77-add1-57334b728a66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe5c9037c1c44846b3c840cd81d7f177', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:51.769 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d4bc4012-7c81-4a7f-9a67-f9545d549873 in datapath cba39058-6a05-4f77-add1-57334b728a66 updated#033[00m Feb 1 04:57:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:51.773 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cba39058-6a05-4f77-add1-57334b728a66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:51 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:51.774 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[73f3040c-27cb-441c-9ee8-b680be735486]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:57:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3426990219' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:57:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:57:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3426990219' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:57:52 localhost nova_compute[274317]: 2026-02-01 09:57:52.171 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 1.2 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 30 KiB/s rd, 26 MiB/s wr, 56 op/s Feb 1 04:57:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:52.592 2 INFO neutron.agent.securitygroups_rpc [None req-78a64591-c841-4e75-af76-a7af0cedc758 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:57:52 localhost systemd[1]: tmp-crun.xUARQD.mount: Deactivated successfully. 
Feb 1 04:57:52 localhost podman[311999]: 2026-02-01 09:57:52.887221524 +0000 UTC m=+0.090589308 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:57:52 localhost podman[311998]: 2026-02-01 09:57:52.854363776 +0000 UTC m=+0.063297633 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3) Feb 1 04:57:52 localhost podman[311999]: 2026-02-01 09:57:52.924169359 +0000 UTC m=+0.127537123 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:57:52 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:57:52 localhost podman[311998]: 2026-02-01 09:57:52.939669089 +0000 UTC m=+0.148602966 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:57:52 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.118 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.118 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.119 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.120 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:57:53 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:53.415 2 INFO neutron.agent.securitygroups_rpc [None req-ff2643e0-e24a-4f9b-a883-5340b9397f69 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:57:53 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3624975071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.543 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:57:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e170 e170: 6 total, 6 up, 6 in Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.748 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.751 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11628MB free_disk=41.774723052978516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.752 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.754 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.820 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.821 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:57:53 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:53.825 2 INFO neutron.agent.securitygroups_rpc [None req-865e7db1-f37b-4e27-b7e7-fae9537a70ac 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.842 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 04:57:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:53.846 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.861 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.861 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.875 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 04:57:53 localhost podman[312084]: 2026-02-01 09:57:53.908117477 +0000 UTC m=+0.061450356 container kill 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:53 localhost dnsmasq[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/addn_hosts - 0 addresses Feb 1 04:57:53 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/host Feb 1 04:57:53 localhost dnsmasq-dhcp[310963]: read /var/lib/neutron/dhcp/0ac2ccf3-74d8-4f0a-903f-4cf43406d18d/opts Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.909 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 04:57:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:57:53 localhost nova_compute[274317]: 2026-02-01 09:57:53.925 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. 
Feb 1 04:57:54 localhost podman[312099]: 2026-02-01 09:57:54.052428018 +0000 UTC m=+0.114625042 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, release=1769056855, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, version=9.7) Feb 1 04:57:54 localhost podman[312099]: 2026-02-01 09:57:54.062726017 +0000 UTC m=+0.124922991 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, config_id=openstack_network_exporter, vcs-type=git, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 04:57:54 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:57:54 localhost ovn_controller[152787]: 2026-02-01T09:57:54Z|00191|binding|INFO|Releasing lport 70e8c4ee-b7bf-45c9-80c5-43450e09967e from this chassis (sb_readonly=0) Feb 1 04:57:54 localhost kernel: device tap70e8c4ee-b7 left promiscuous mode Feb 1 04:57:54 localhost ovn_controller[152787]: 2026-02-01T09:57:54Z|00192|binding|INFO|Setting lport 70e8c4ee-b7bf-45c9-80c5-43450e09967e down in Southbound Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.093 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:54.108 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9279ffc0dc2f48079045ce3d49e21210', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2934a88b-2cb8-43fc-bc4a-0266d2f826b9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=70e8c4ee-b7bf-45c9-80c5-43450e09967e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:57:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:54.110 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 70e8c4ee-b7bf-45c9-80c5-43450e09967e in datapath 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d unbound from our chassis#033[00m Feb 1 04:57:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:54.115 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:57:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:57:54.116 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[81382063-417f-46e6-a096-2bc6be446403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.116 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:54 localhost podman[312135]: 2026-02-01 09:57:54.139998952 +0000 UTC m=+0.081763585 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:57:54 localhost podman[312135]: 2026-02-01 09:57:54.14867041 +0000 UTC m=+0.090435043 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 1 04:57:54 
localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:57:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:54.228 2 INFO neutron.agent.securitygroups_rpc [None req-ba46717f-c9e1-458a-88f8-c050502ffc34 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 192 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 1.4 MiB/s wr, 156 op/s Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.391 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.397 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.418 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.445 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:57:54 localhost nova_compute[274317]: 2026-02-01 09:57:54.446 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:57:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:54.590 2 INFO neutron.agent.securitygroups_rpc [None req-6fa3fe0c-2f0b-4fce-bb11-9a1bc41c0c58 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:57:55 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:55.011 2 INFO neutron.agent.securitygroups_rpc [None req-021b4e45-6986-46a7-9869-b4d11b35b6ad 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:57:55 localhost neutron_dhcp_agent[259221]: 
2026-02-01 09:57:55.030 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:57:55 localhost nova_compute[274317]: 2026-02-01 09:57:55.446 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:55 localhost nova_compute[274317]: 2026-02-01 09:57:55.447 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:55 localhost nova_compute[274317]: 2026-02-01 09:57:55.448 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:55 localhost nova_compute[274317]: 2026-02-01 09:57:55.448 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:55 localhost nova_compute[274317]: 2026-02-01 09:57:55.448 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:57:56 localhost nova_compute[274317]: 2026-02-01 09:57:56.098 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:56 localhost nova_compute[274317]: 2026-02-01 09:57:56.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:57:56 localhost nova_compute[274317]: 2026-02-01 09:57:56.202 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:56 localhost neutron_sriov_agent[252054]: 2026-02-01 09:57:56.235 2 INFO neutron.agent.securitygroups_rpc [None req-bf304f40-e466-4d37-a7c0-f4cca9d82926 c808dfb9cb284e60ac814aa25eae5d58 3e1ea1a33e554968ba8ebaf6753c9c5d - - default default] Security group member updated ['7af9328f-e889-4487-9888-9c5f8b1745d9']#033[00m Feb 1 04:57:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:57:56 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1939286376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:57:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 192 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 1.4 MiB/s wr, 153 op/s Feb 1 04:57:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:57:56 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1939286376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:57:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e171 e171: 6 total, 6 up, 6 in Feb 1 04:57:56 localhost dnsmasq[310963]: exiting on receipt of SIGTERM Feb 1 04:57:56 localhost systemd[1]: libpod-6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c.scope: Deactivated successfully. Feb 1 04:57:56 localhost podman[312184]: 2026-02-01 09:57:56.790492557 +0000 UTC m=+0.071566069 container kill 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:57:56 localhost podman[312196]: 2026-02-01 09:57:56.863951963 +0000 UTC m=+0.060211596 container died 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:57:56 localhost podman[312196]: 2026-02-01 09:57:56.894042736 +0000 UTC m=+0.090302329 container cleanup 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:57:56 localhost systemd[1]: libpod-conmon-6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c.scope: Deactivated successfully. 
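The nova_compute and ceph-mon entries just above show the storage capacity poll behind the resource tracker update: nova runs "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (0.465s here), and the mon audit channel records the matching "df" and "osd pool get-quota" commands for the volumes pool from client.openstack. A minimal sketch of the same two queries is below, assuming the client.openstack keyring and the /etc/ceph/ceph.conf from the log are present on the host; the JSON key names are the usual ones for recent Ceph releases and should be checked against the cluster's actual output.

    import json
    import subprocess

    CEPH = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def ceph_json(*args):
        """Run a ceph CLI subcommand with --format=json and decode the result."""
        out = subprocess.run(CEPH + list(args) + ["--format=json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    df = ceph_json("df")                                     # same query nova issues above
    quota = ceph_json("osd", "pool", "get-quota", "volumes")

    stats = df["stats"]  # total/avail byte counters; key names can vary by Ceph release
    print("cluster: %.1f GiB free of %.1f GiB" %
          (stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))
    print("volumes pool quota_max_bytes:", quota.get("quota_max_bytes", 0))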
Feb 1 04:57:56 localhost podman[312198]: 2026-02-01 09:57:56.932481827 +0000 UTC m=+0.120486644 container remove 6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:57:57 localhost nova_compute[274317]: 2026-02-01 09:57:57.198 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:57.306 259225 INFO neutron.agent.dhcp.agent [None req-fca1c2ab-09be-4ffa-ad47-b60b40dfc27d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:57.308 259225 INFO neutron.agent.dhcp.agent [None req-fca1c2ab-09be-4ffa-ad47-b60b40dfc27d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:57 localhost systemd[1]: var-lib-containers-storage-overlay-a141c608be1b875224d2f7067e777389f3126fc9994644b0ea89131e8d650861-merged.mount: Deactivated successfully. Feb 1 04:57:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6eea0caba832f1892b9044d8e2489846e1759460e40b2c886b19aed32c48bb3c-userdata-shm.mount: Deactivated successfully. Feb 1 04:57:57 localhost systemd[1]: run-netns-qdhcp\x2d0ac2ccf3\x2d74d8\x2d4f0a\x2d903f\x2d4cf43406d18d.mount: Deactivated successfully. 
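The mount units named just above encode filesystem paths with systemd's escaping rules (systemd.unit(5)): '/' becomes '-', and literal characters such as '-' become \xNN, which is why the qdhcp namespace for network 0ac2ccf3-... appears as run-netns-qdhcp\x2d0ac2ccf3\x2d....mount. systemd-escape --unescape --path performs the reverse translation; the short sketch below does the same in Python so the "Deactivated successfully" lines can be matched back to the namespaces and overlay mounts that the dnsmasq teardown removed.

    import re

    def mount_unit_to_path(unit):
        """Reverse systemd unit-name escaping for a .mount unit: '-' -> '/', then \\xNN -> byte."""
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), path)

    print(mount_unit_to_path(
        r"run-netns-qdhcp\x2d0ac2ccf3\x2d74d8\x2d4f0a\x2d903f\x2d4cf43406d18d.mount"))
    # -> /run/netns/qdhcp-0ac2ccf3-74d8-4f0a-903f-4cf43406d18d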
Feb 1 04:57:57 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:57:57.852 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:57:57 localhost nova_compute[274317]: 2026-02-01 09:57:57.946 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:57 localhost nova_compute[274317]: 2026-02-01 09:57:57.996 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:58 localhost nova_compute[274317]: 2026-02-01 09:57:58.019 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:57:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 289 op/s Feb 1 04:58:00 localhost podman[236852]: time="2026-02-01T09:58:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:58:00 localhost podman[236852]: @ - - [01/Feb/2026:09:58:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157180 "" "Go-http-client/1.1" Feb 1 04:58:00 localhost podman[236852]: @ - - [01/Feb/2026:09:58:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18777 "" "Go-http-client/1.1" Feb 1 04:58:00 localhost nova_compute[274317]: 2026-02-01 09:58:00.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:00 localhost nova_compute[274317]: 2026-02-01 09:58:00.117 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 1.0 MiB/s wr, 245 op/s Feb 1 04:58:01 localhost nova_compute[274317]: 2026-02-01 09:58:01.204 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:01 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:01.248 2 INFO neutron.agent.securitygroups_rpc [None req-9c92fd3b-2244-4a02-b891-f277532d3dc4 c808dfb9cb284e60ac814aa25eae5d58 3e1ea1a33e554968ba8ebaf6753c9c5d - - default default] Security group member updated ['7af9328f-e889-4487-9888-9c5f8b1745d9']#033[00m Feb 1 04:58:01 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:01.558 2 INFO neutron.agent.securitygroups_rpc [None req-eadca791-9490-47ec-9527-60ebe2a9b958 e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:58:01 localhost 
openstack_network_exporter[239388]: ERROR 09:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:58:01 localhost openstack_network_exporter[239388]: Feb 1 04:58:01 localhost openstack_network_exporter[239388]: ERROR 09:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:58:01 localhost openstack_network_exporter[239388]: Feb 1 04:58:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e172 e172: 6 total, 6 up, 6 in Feb 1 04:58:02 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:02.000 2 INFO neutron.agent.securitygroups_rpc [None req-825e2b67-68b3-4de9-af8c-c04099a8e61e e0ee367368fd4fbebf2e13aa0ff98129 fe5c9037c1c44846b3c840cd81d7f177 - - default default] Security group member updated ['3438fec4-12ca-4b88-8e3d-decadab8f7bf']#033[00m Feb 1 04:58:02 localhost nova_compute[274317]: 2026-02-01 09:58:02.239 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 1.1 KiB/s wr, 130 op/s Feb 1 04:58:02 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:02.665 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:be:37 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-f90b2d3c-17ac-4074-8e52-3a58738705b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f90b2d3c-17ac-4074-8e52-3a58738705b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b17a2b9-5e93-4788-90e6-3eea4883a111, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a21a9b5e-c616-4953-aa12-b45630ee9601) old=Port_Binding(mac=['fa:16:3e:f3:be:37 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-f90b2d3c-17ac-4074-8e52-3a58738705b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f90b2d3c-17ac-4074-8e52-3a58738705b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:02 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:02.667 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a21a9b5e-c616-4953-aa12-b45630ee9601 in datapath f90b2d3c-17ac-4074-8e52-3a58738705b1 updated#033[00m Feb 1 04:58:02 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:58:02.670 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f90b2d3c-17ac-4074-8e52-3a58738705b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:02 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:02.671 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[effcda5e-b00a-48e7-9cd6-7a5990be0057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:58:02 localhost systemd[1]: tmp-crun.ytLvfd.mount: Deactivated successfully. Feb 1 04:58:02 localhost podman[312225]: 2026-02-01 09:58:02.858611948 +0000 UTC m=+0.073180528 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:02 localhost podman[312225]: 2026-02-01 09:58:02.896680248 +0000 UTC m=+0.111248788 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 1 04:58:02 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:58:03 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e173 e173: 6 total, 6 up, 6 in Feb 1 04:58:03 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:03.774 2 INFO neutron.agent.securitygroups_rpc [None req-6f36bda7-f8b8-46c7-a4c3-95b983979dc7 388100543d2c4f8fb0150ffdd8da2504 674a59d5810c453484339f60db55c64e - - default default] Security group member updated ['41d73aa2-6075-4985-b34c-e67fa66518ee']#033[00m Feb 1 04:58:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:04.027 2 INFO neutron.agent.securitygroups_rpc [None req-752af62a-19da-4b3f-a3e8-9a1412f9f50e 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 1.7 KiB/s wr, 146 op/s Feb 1 04:58:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e174 e174: 6 total, 6 up, 6 in Feb 1 04:58:05 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:05.004 2 INFO neutron.agent.securitygroups_rpc [None req-dcf8ea11-57cc-44a8-b32b-d084e8cc9746 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:05 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:05.045 2 INFO neutron.agent.securitygroups_rpc [None req-6f8cb305-6336-427d-a0d9-37ed7bed8449 388100543d2c4f8fb0150ffdd8da2504 674a59d5810c453484339f60db55c64e - - default default] Security group member updated ['41d73aa2-6075-4985-b34c-e67fa66518ee']#033[00m Feb 1 04:58:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57e9d5dc-73a6-45c8-a219-7bfb6963c354", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:05 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:05.059+0000 7f93ec23e640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:05.059+0000 7f93ec23e640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:05.059+0000 7f93ec23e640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:05.059+0000 7f93ec23e640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:05.059+0000 7f93ec23e640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57e9d5dc-73a6-45c8-a219-7bfb6963c354/.meta.tmp' Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57e9d5dc-73a6-45c8-a219-7bfb6963c354/.meta.tmp' to config b'/volumes/_nogroup/57e9d5dc-73a6-45c8-a219-7bfb6963c354/.meta' Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57e9d5dc-73a6-45c8-a219-7bfb6963c354", "format": "json"}]: dispatch Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e175 e175: 6 total, 6 up, 6 in Feb 1 04:58:05 localhost 
neutron_sriov_agent[252054]: 2026-02-01 09:58:05.879 2 INFO neutron.agent.securitygroups_rpc [None req-cf4e6267-50f2-41a1-bdc4-48a2e39e61cf 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:06 localhost nova_compute[274317]: 2026-02-01 09:58:06.206 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 341 KiB/s rd, 899 B/s wr, 13 op/s Feb 1 04:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 04:58:06 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:06.804 2 INFO neutron.agent.securitygroups_rpc [None req-b303317c-4287-410b-804b-7e395b86e859 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:06 localhost podman[312257]: 2026-02-01 09:58:06.823158279 +0000 UTC m=+0.083763376 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:58:06 localhost podman[312257]: 2026-02-01 09:58:06.834725398 +0000 UTC m=+0.095330505 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:58:06 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
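The ceph-mgr audit entries just above (fs subvolume create with a 1 GiB size and namespace isolation, then fs subvolume getpath) and the ones a few entries further on (an fs clone status probe the mgr rejects with "Operation not supported" because the subvolume is not a clone, then fs subvolume rm --force) trace one complete CephFS subvolume lifecycle driven by client.openstack, most likely a Manila CephFS share request. A sketch of the same sequence issued by hand is below; it assumes a reachable cluster, the client.openstack keyring seen in the log, and the usual "ceph fs subvolume" flag spellings, which are worth confirming with "ceph fs subvolume create -h" on the installed release.

    import subprocess

    VOL = "cephfs"
    SUB = "57e9d5dc-73a6-45c8-a219-7bfb6963c354"   # subvolume name from the audit log above

    def ceph(*args):
        """Run a ceph CLI command as the client.openstack user and return the result."""
        return subprocess.run(["ceph", "--id", "openstack", *args],
                              check=False, capture_output=True, text=True)

    ceph("fs", "subvolume", "create", VOL, SUB, "--size", "1073741824",
         "--namespace-isolated", "--mode", "0755")
    print(ceph("fs", "subvolume", "getpath", VOL, SUB).stdout.strip())
    # On a plain subvolume this probe fails, matching the (95) Operation not supported
    # reply that the mgr logs a few entries below.
    print(ceph("fs", "clone", "status", VOL, SUB).stderr.strip())
    ceph("fs", "subvolume", "rm", VOL, SUB, "--force")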
Feb 1 04:58:07 localhost nova_compute[274317]: 2026-02-01 09:58:07.274 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:07 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e176 e176: 6 total, 6 up, 6 in Feb 1 04:58:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 217 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 953 KiB/s rd, 5.4 MiB/s wr, 228 op/s Feb 1 04:58:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57e9d5dc-73a6-45c8-a219-7bfb6963c354", "format": "json"}]: dispatch Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.652+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57e9d5dc-73a6-45c8-a219-7bfb6963c354' of type subvolume Feb 1 04:58:08 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57e9d5dc-73a6-45c8-a219-7bfb6963c354' of type subvolume Feb 1 04:58:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57e9d5dc-73a6-45c8-a219-7bfb6963c354", "force": true, "format": "json"}]: dispatch Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57e9d5dc-73a6-45c8-a219-7bfb6963c354'' moved to trashcan Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:58:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57e9d5dc-73a6-45c8-a219-7bfb6963c354, vol_name:cephfs) < "" Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.684+0000 7f93eea43640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.684+0000 7f93eea43640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.684+0000 7f93eea43640 -1 client.0 error registering 
admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.684+0000 7f93eea43640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.684+0000 7f93eea43640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.713+0000 7f93eda41640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.713+0000 7f93eda41640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.713+0000 7f93eda41640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.713+0000 7f93eda41640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:08.713+0000 7f93eda41640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 04:58:08 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e177 e177: 6 total, 6 up, 6 in Feb 1 04:58:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e178 e178: 6 total, 6 up, 6 in Feb 1 04:58:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 217 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 972 KiB/s rd, 5.5 MiB/s wr, 233 op/s Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.208 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:11.279 2 INFO neutron.agent.securitygroups_rpc [None req-2e636825-6352-4de3-92a6-2082180ce0f9 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:11 localhost 
neutron_sriov_agent[252054]: 2026-02-01 09:58:11.442 2 INFO neutron.agent.securitygroups_rpc [None req-0b3ddafe-7058-48e9-ade9-6122faaa4a98 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:11 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:11.790 259225 INFO neutron.agent.linux.ip_lib [None req-93a17f9e-594d-4b12-bd0f-6a081cf91730 - - - - - -] Device tapb401a566-f9 cannot be used as it has no MAC address#033[00m Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.812 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost kernel: device tapb401a566-f9 entered promiscuous mode Feb 1 04:58:11 localhost NetworkManager[5972]: [1769939891.8226] manager: (tapb401a566-f9): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.822 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost ovn_controller[152787]: 2026-02-01T09:58:11Z|00193|binding|INFO|Claiming lport b401a566-f92c-44aa-86ca-bb673a1a49df for this chassis. Feb 1 04:58:11 localhost ovn_controller[152787]: 2026-02-01T09:58:11Z|00194|binding|INFO|b401a566-f92c-44aa-86ca-bb673a1a49df: Claiming unknown Feb 1 04:58:11 localhost systemd-udevd[312314]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:58:11 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:11.835 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fd0f2c71-aa97-4b39-b751-c91f8ed96a20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd0f2c71-aa97-4b39-b751-c91f8ed96a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1a79666-54fd-413d-b574-80dec3e84f3c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b401a566-f92c-44aa-86ca-bb673a1a49df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:11 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:11.837 158655 INFO neutron.agent.ovn.metadata.agent [-] Port b401a566-f92c-44aa-86ca-bb673a1a49df in datapath fd0f2c71-aa97-4b39-b751-c91f8ed96a20 bound to our chassis#033[00m Feb 1 04:58:11 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:11.839 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 91f0b00b-52ea-4ae1-b321-59487fbf888e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] 
_get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:58:11 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:11.840 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd0f2c71-aa97-4b39-b751-c91f8ed96a20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:11 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:11.841 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[91bbd165-8b3b-4925-b686-2920868a9136]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.863 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost ovn_controller[152787]: 2026-02-01T09:58:11Z|00195|binding|INFO|Setting lport b401a566-f92c-44aa-86ca-bb673a1a49df ovn-installed in OVS Feb 1 04:58:11 localhost ovn_controller[152787]: 2026-02-01T09:58:11Z|00196|binding|INFO|Setting lport b401a566-f92c-44aa-86ca-bb673a1a49df up in Southbound Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.868 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost journal[224955]: ethtool ioctl error on tapb401a566-f9: No such device Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.903 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:11 localhost nova_compute[274317]: 2026-02-01 09:58:11.929 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:12.027 2 INFO neutron.agent.securitygroups_rpc [None req-4918b331-88ae-4de3-8570-b8490451d4d3 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 217 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 716 KiB/s rd, 4.1 MiB/s wr, 171 op/s Feb 1 04:58:12 localhost nova_compute[274317]: 2026-02-01 09:58:12.298 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:12 localhost podman[312385]: Feb 1 04:58:12 localhost podman[312385]: 2026-02-01 09:58:12.793332004 +0000 UTC m=+0.090236506 container create 
4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:12 localhost systemd[1]: Started libpod-conmon-4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf.scope. Feb 1 04:58:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:12.845 2 INFO neutron.agent.securitygroups_rpc [None req-c4f766e3-6d2a-4be1-a25c-795440958939 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:12 localhost podman[312385]: 2026-02-01 09:58:12.751726856 +0000 UTC m=+0.048631428 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:12 localhost systemd[1]: Started libcrun container. Feb 1 04:58:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c7157689accf0627dec7ac41e1a6a7bb79ef190b9ceae4d14af1a2c64b5d83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:12 localhost podman[312385]: 2026-02-01 09:58:12.871425524 +0000 UTC m=+0.168330036 container init 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:12 localhost podman[312385]: 2026-02-01 09:58:12.880975451 +0000 UTC m=+0.177879953 container start 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:58:12 localhost dnsmasq[312404]: started, version 2.85 cachesize 150 Feb 1 04:58:12 localhost dnsmasq[312404]: DNS service limited to local subnets Feb 1 04:58:12 localhost dnsmasq[312404]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:12 localhost dnsmasq[312404]: warning: no upstream servers configured Feb 1 04:58:12 localhost dnsmasq-dhcp[312404]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:12 localhost dnsmasq[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/addn_hosts - 0 addresses Feb 1 04:58:12 localhost dnsmasq-dhcp[312404]: read 
/var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/host Feb 1 04:58:12 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/opts Feb 1 04:58:12 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:12.939 259225 INFO neutron.agent.dhcp.agent [None req-8da746e0-d7aa-4991-abe2-3ef940dd82e9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:11Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=24e9fcfa-968e-4b3e-9010-3ad066cf1940, ip_allocation=immediate, mac_address=fa:16:3e:a5:88:44, name=tempest-PortsTestJSON-1760108498, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:58:09Z, description=, dns_domain=, id=fd0f2c71-aa97-4b39-b751-c91f8ed96a20, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-988107197, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50137, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2381, status=ACTIVE, subnets=['752bc011-3910-4afc-b3cd-dd2d12938ecb'], tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:10Z, vlan_transparent=None, network_id=fd0f2c71-aa97-4b39-b751-c91f8ed96a20, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['277d73b7-d267-437d-b5df-bd560d180a7a'], standard_attr_id=2395, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:11Z on network fd0f2c71-aa97-4b39-b751-c91f8ed96a20#033[00m Feb 1 04:58:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e179 e179: 6 total, 6 up, 6 in Feb 1 04:58:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:13.119 259225 INFO neutron.agent.dhcp.agent [None req-e4492cc7-bcc7-40a4-a219-ea28a02989d8 - - - - - -] DHCP configuration for ports {'ca31c6b2-9a19-43e6-9856-38c7b199a032'} is completed#033[00m Feb 1 04:58:13 localhost dnsmasq[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/addn_hosts - 1 addresses Feb 1 04:58:13 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/host Feb 1 04:58:13 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/opts Feb 1 04:58:13 localhost podman[312422]: 2026-02-01 09:58:13.252227733 +0000 UTC m=+0.059502474 container kill 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:58:13 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:13.351 2 INFO neutron.agent.securitygroups_rpc [None 
req-07f0d5e0-07b1-4081-9883-e9d75a91fc18 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:13.392 259225 INFO neutron.agent.dhcp.agent [None req-ad122b0b-ae9e-4cdc-82dc-6ad8b77d9941 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:11Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=95c5ff06-26c9-4d2f-b0bb-4c214ed71f24, ip_allocation=immediate, mac_address=fa:16:3e:26:75:fb, name=tempest-PortsTestJSON-1814381455, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:58:09Z, description=, dns_domain=, id=fd0f2c71-aa97-4b39-b751-c91f8ed96a20, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-988107197, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50137, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2381, status=ACTIVE, subnets=['752bc011-3910-4afc-b3cd-dd2d12938ecb'], tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:10Z, vlan_transparent=None, network_id=fd0f2c71-aa97-4b39-b751-c91f8ed96a20, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['277d73b7-d267-437d-b5df-bd560d180a7a'], standard_attr_id=2408, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:11Z on network fd0f2c71-aa97-4b39-b751-c91f8ed96a20#033[00m Feb 1 04:58:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:13.468 259225 INFO neutron.agent.dhcp.agent [None req-ba2f7910-494f-4cc0-a3d7-94ff2e458719 - - - - - -] DHCP configuration for ports {'24e9fcfa-968e-4b3e-9010-3ad066cf1940'} is completed#033[00m Feb 1 04:58:13 localhost dnsmasq[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/addn_hosts - 2 addresses Feb 1 04:58:13 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/host Feb 1 04:58:13 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/opts Feb 1 04:58:13 localhost podman[312462]: 2026-02-01 09:58:13.59838919 +0000 UTC m=+0.057043840 container kill 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 04:58:13 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:13.615 2 INFO neutron.agent.securitygroups_rpc [None req-07f0d5e0-07b1-4081-9883-e9d75a91fc18 d74a270228ef43bb9eebd5b8b203e133 
fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:13.653 259225 INFO neutron.agent.linux.ip_lib [None req-0473742d-5051-4b1d-8f7f-48321e2503c9 - - - - - -] Device tap3a91aa3a-fb cannot be used as it has no MAC address#033[00m Feb 1 04:58:13 localhost nova_compute[274317]: 2026-02-01 09:58:13.727 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:13 localhost kernel: device tap3a91aa3a-fb entered promiscuous mode Feb 1 04:58:13 localhost ovn_controller[152787]: 2026-02-01T09:58:13Z|00197|binding|INFO|Claiming lport 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 for this chassis. Feb 1 04:58:13 localhost NetworkManager[5972]: [1769939893.7337] manager: (tap3a91aa3a-fb): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Feb 1 04:58:13 localhost ovn_controller[152787]: 2026-02-01T09:58:13Z|00198|binding|INFO|3a91aa3a-fb9f-4945-91e3-85f0d278b0b5: Claiming unknown Feb 1 04:58:13 localhost nova_compute[274317]: 2026-02-01 09:58:13.735 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:13 localhost systemd-udevd[312317]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:58:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:13.744 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28f2370b-4aa7-434f-90cb-05cc01bed2bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a91aa3a-fb9f-4945-91e3-85f0d278b0b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:13.745 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 in datapath 4da937bf-f66e-48ce-bf66-f7d3d9f7bc52 bound to our chassis#033[00m Feb 1 04:58:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:13.747 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4da937bf-f66e-48ce-bf66-f7d3d9f7bc52 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:13 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:58:13.747 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3505d4-dfe6-41d6-bded-f2987918e060]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost ovn_controller[152787]: 2026-02-01T09:58:13Z|00199|binding|INFO|Setting lport 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 ovn-installed in OVS Feb 1 04:58:13 localhost ovn_controller[152787]: 2026-02-01T09:58:13Z|00200|binding|INFO|Setting lport 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 up in Southbound Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost nova_compute[274317]: 2026-02-01 09:58:13.768 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost journal[224955]: ethtool ioctl error on tap3a91aa3a-fb: No such device Feb 1 04:58:13 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:13.862 2 INFO neutron.agent.securitygroups_rpc [None req-b64013eb-118b-4ee2-9fc4-6cc3b98c578e 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:13 localhost nova_compute[274317]: 2026-02-01 09:58:13.905 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:13 localhost nova_compute[274317]: 2026-02-01 09:58:13.929 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:14 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:14.005 259225 INFO neutron.agent.dhcp.agent [None req-3d38efc9-8679-40f9-8a62-ff441f1be388 - - - - - -] DHCP configuration for ports {'95c5ff06-26c9-4d2f-b0bb-4c214ed71f24'} is completed#033[00m Feb 1 04:58:14 localhost dnsmasq[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/addn_hosts - 1 addresses Feb 1 04:58:14 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/host Feb 1 04:58:14 localhost podman[312542]: 2026-02-01 09:58:14.191724714 +0000 UTC m=+0.069812604 container kill 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:58:14 localhost dnsmasq-dhcp[312404]: read 
/var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/opts Feb 1 04:58:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 191 KiB/s wr, 99 op/s Feb 1 04:58:14 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:14.503 2 INFO neutron.agent.securitygroups_rpc [None req-ff9a1527-b1c3-4b84-beeb-30758949e010 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:14 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:14.526 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:14 localhost systemd[1]: tmp-crun.E0T6mI.mount: Deactivated successfully. Feb 1 04:58:14 localhost dnsmasq[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/addn_hosts - 0 addresses Feb 1 04:58:14 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/host Feb 1 04:58:14 localhost dnsmasq-dhcp[312404]: read /var/lib/neutron/dhcp/fd0f2c71-aa97-4b39-b751-c91f8ed96a20/opts Feb 1 04:58:14 localhost podman[312593]: 2026-02-01 09:58:14.572846033 +0000 UTC m=+0.077138331 container kill 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 04:58:14 localhost podman[312631]: Feb 1 04:58:14 localhost podman[312631]: 2026-02-01 09:58:14.78316763 +0000 UTC m=+0.084400326 container create 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 1 04:58:14 localhost systemd[1]: Started libpod-conmon-6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b.scope. Feb 1 04:58:14 localhost systemd[1]: tmp-crun.n4HhGo.mount: Deactivated successfully. Feb 1 04:58:14 localhost podman[312631]: 2026-02-01 09:58:14.743987846 +0000 UTC m=+0.045220512 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:14 localhost systemd[1]: Started libcrun container. 
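The block above shows the pattern that recurs through this excerpt: neutron-dhcp-agent runs one dnsmasq per tenant network inside a podman container named neutron-dnsmasq-qdhcp-<network-uuid>, and dnsmasq reads that network's addn_hosts/host/opts files from /var/lib/neutron/dhcp/<network-uuid>/. When reading a long window like this, it can help to collapse the journal into a per-network timeline. The sketch below is illustrative only (it is not part of the deployment) and assumes one journal record per line, e.g. as printed by journalctl, with the exact phrasing seen here.

```python
import re
from collections import defaultdict

# Illustrative parser for the phrasing seen in this excerpt:
#  - podman "container <event> <id> (... name=neutron-dnsmasq-qdhcp-<uuid> ...)"
#  - dnsmasq "read /var/lib/neutron/dhcp/<uuid>/addn_hosts - N addresses"
EVENT = re.compile(r"container (create|init|start|kill|died|cleanup|remove)\b")
QDHCP = re.compile(r"name=neutron-dnsmasq-qdhcp-(?P<net>[0-9a-f-]{36})")
RELOAD = re.compile(
    r"read /var/lib/neutron/dhcp/(?P<net>[0-9a-f-]{36})/addn_hosts - (?P<n>\d+) addresses")

def summarise(journal_text: str) -> dict:
    """Map network UUID -> ordered list of dnsmasq lifecycle events."""
    timeline = defaultdict(list)
    for line in journal_text.splitlines():
        if (m := QDHCP.search(line)) and (e := EVENT.search(line)):
            timeline[m["net"]].append(e.group(1))
        elif m := RELOAD.search(line):
            timeline[m["net"]].append(f"reload:{m['n']}-addresses")
    return dict(timeline)
```

Fed the window above, such a summary would show the fd0f2c71... network cycling through several kill/reload events before teardown, while 4da937bf... gets a fresh container.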
Feb 1 04:58:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4383ecd4dd19cf8e2c67a94d44ef1d65db9bfee3a02ca20cd31b89d8b95a13e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:14 localhost podman[312631]: 2026-02-01 09:58:14.872850779 +0000 UTC m=+0.174083465 container init 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:58:14 localhost podman[312631]: 2026-02-01 09:58:14.881981572 +0000 UTC m=+0.183214268 container start 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:14 localhost dnsmasq[312654]: started, version 2.85 cachesize 150 Feb 1 04:58:14 localhost dnsmasq[312654]: DNS service limited to local subnets Feb 1 04:58:14 localhost dnsmasq[312654]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:14 localhost dnsmasq[312654]: warning: no upstream servers configured Feb 1 04:58:14 localhost dnsmasq-dhcp[312654]: DHCP, static leases only on 10.100.255.240, lease time 1d Feb 1 04:58:14 localhost dnsmasq[312654]: read /var/lib/neutron/dhcp/4da937bf-f66e-48ce-bf66-f7d3d9f7bc52/addn_hosts - 0 addresses Feb 1 04:58:14 localhost dnsmasq-dhcp[312654]: read /var/lib/neutron/dhcp/4da937bf-f66e-48ce-bf66-f7d3d9f7bc52/host Feb 1 04:58:14 localhost dnsmasq-dhcp[312654]: read /var/lib/neutron/dhcp/4da937bf-f66e-48ce-bf66-f7d3d9f7bc52/opts Feb 1 04:58:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:14.948 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 91f0b00b-52ea-4ae1-b321-59487fbf888e with type ""#033[00m Feb 1 04:58:14 localhost ovn_controller[152787]: 2026-02-01T09:58:14Z|00201|binding|INFO|Removing iface tapb401a566-f9 ovn-installed in OVS Feb 1 04:58:14 localhost ovn_controller[152787]: 2026-02-01T09:58:14Z|00202|binding|INFO|Removing lport b401a566-f92c-44aa-86ca-bb673a1a49df ovn-installed in OVS Feb 1 04:58:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:14.950 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 
'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fd0f2c71-aa97-4b39-b751-c91f8ed96a20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd0f2c71-aa97-4b39-b751-c91f8ed96a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1a79666-54fd-413d-b574-80dec3e84f3c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b401a566-f92c-44aa-86ca-bb673a1a49df) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:14.951 158655 INFO neutron.agent.ovn.metadata.agent [-] Port b401a566-f92c-44aa-86ca-bb673a1a49df in datapath fd0f2c71-aa97-4b39-b751-c91f8ed96a20 unbound from our chassis#033[00m Feb 1 04:58:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:14.953 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd0f2c71-aa97-4b39-b751-c91f8ed96a20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:14 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:14.954 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3b873c-e56b-4324-a22f-d75f9d5b31c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:14 localhost nova_compute[274317]: 2026-02-01 09:58:14.988 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:14 localhost nova_compute[274317]: 2026-02-01 09:58:14.995 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e180 e180: 6 total, 6 up, 6 in Feb 1 04:58:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:15.052 259225 INFO neutron.agent.dhcp.agent [None req-66213892-68cc-46aa-a125-71ce87b58acf - - - - - -] DHCP configuration for ports {'5935b501-bf91-4a16-bdb0-4b0523f2e8eb'} is completed#033[00m Feb 1 04:58:15 localhost podman[312668]: 2026-02-01 09:58:15.059001726 +0000 UTC m=+0.052808807 container kill 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:58:15 localhost dnsmasq[312404]: exiting on receipt of SIGTERM Feb 1 04:58:15 localhost systemd[1]: libpod-4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf.scope: Deactivated successfully. 
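Two different teardown paths are visible around the container above: several podman "container kill" events that are each followed by dnsmasq re-reading its addn_hosts/host/opts files (an allocation reload), and a final one that ends with "exiting on receipt of SIGTERM" and removal of the container. Assuming the agent drives this through podman's kill command, the same two effects can be reproduced by hand as sketched below; re-reading the hosts and options files on SIGHUP is dnsmasq's documented behaviour, and the container name is copied from the log.

```python
import subprocess

# Illustrative only (not the agent's own code). The container name is the one
# seen in this excerpt; substitute the UUID of another network as needed.
NAME = "neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20"

def reload_dnsmasq(name: str = NAME) -> None:
    # SIGHUP: dnsmasq re-reads addn_hosts / dhcp-hostsfile / dhcp-optsfile,
    # which is what produces the "read .../addn_hosts - N addresses" lines.
    subprocess.run(["podman", "kill", "--signal", "HUP", name], check=True)

def stop_dnsmasq(name: str = NAME) -> None:
    # SIGTERM: matches "dnsmasq: exiting on receipt of SIGTERM" above.
    subprocess.run(["podman", "kill", "--signal", "TERM", name], check=True)
```

A manual reload_dnsmasq() should only trigger the re-read lines, without restarting the container; stop_dnsmasq() mirrors the final teardown seen here.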
Feb 1 04:58:15 localhost podman[312682]: 2026-02-01 09:58:15.118231491 +0000 UTC m=+0.047777311 container died 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:15 localhost podman[312682]: 2026-02-01 09:58:15.149645166 +0000 UTC m=+0.079190916 container cleanup 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:58:15 localhost systemd[1]: libpod-conmon-4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf.scope: Deactivated successfully. Feb 1 04:58:15 localhost podman[312686]: 2026-02-01 09:58:15.211560944 +0000 UTC m=+0.130623359 container remove 4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd0f2c71-aa97-4b39-b751-c91f8ed96a20, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:15 localhost nova_compute[274317]: 2026-02-01 09:58:15.222 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:15 localhost nova_compute[274317]: 2026-02-01 09:58:15.230 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:15 localhost kernel: device tapb401a566-f9 left promiscuous mode Feb 1 04:58:15 localhost nova_compute[274317]: 2026-02-01 09:58:15.242 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:15.258 259225 INFO neutron.agent.dhcp.agent [None req-dc7fbdb6-94ac-4033-8e77-88407fd3bf51 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:15.259 259225 INFO neutron.agent.dhcp.agent [None req-dc7fbdb6-94ac-4033-8e77-88407fd3bf51 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:15 localhost 
neutron_sriov_agent[252054]: 2026-02-01 09:58:15.310 2 INFO neutron.agent.securitygroups_rpc [None req-dea63ea4-98ed-4833-b1d5-beae69081804 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:15 localhost systemd[1]: var-lib-containers-storage-overlay-21c7157689accf0627dec7ac41e1a6a7bb79ef190b9ceae4d14af1a2c64b5d83-merged.mount: Deactivated successfully. Feb 1 04:58:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4342662ef3e8d0ee2dc1ebaffa6b34371e14b3d219158381d69898f07ea49adf-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:15 localhost systemd[1]: run-netns-qdhcp\x2dfd0f2c71\x2daa97\x2d4b39\x2db751\x2dc91f8ed96a20.mount: Deactivated successfully. Feb 1 04:58:16 localhost nova_compute[274317]: 2026-02-01 09:58:16.210 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:16 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:16.236 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 181 KiB/s wr, 94 op/s Feb 1 04:58:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e181 e181: 6 total, 6 up, 6 in Feb 1 04:58:17 localhost nova_compute[274317]: 2026-02-01 09:58:17.327 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:17.899 2 INFO neutron.agent.securitygroups_rpc [None req-38946330-c139-4cbe-adf2-c14c00d51ca6 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 169 KiB/s rd, 220 KiB/s wr, 152 op/s Feb 1 04:58:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e182 e182: 6 total, 6 up, 6 in Feb 1 04:58:19 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:19.414 2 INFO neutron.agent.securitygroups_rpc [None req-dc033b5b-d88f-40f8-9b27-1242de33844a d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:19 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:19.733 259225 INFO neutron.agent.linux.ip_lib [None req-ddf51fcc-26fa-4652-804e-7b0579ae6d59 - - - - - -] Device tap9a001166-81 cannot be used as it has no MAC address#033[00m Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.752 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:19 localhost kernel: device tap9a001166-81 entered promiscuous mode Feb 1 04:58:19 localhost NetworkManager[5972]: [1769939899.7603] manager: (tap9a001166-81): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.762 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:19 localhost ovn_controller[152787]: 2026-02-01T09:58:19Z|00203|binding|INFO|Claiming lport 9a001166-8198-4237-92de-4f0266ce26a0 for this chassis. Feb 1 04:58:19 localhost ovn_controller[152787]: 2026-02-01T09:58:19Z|00204|binding|INFO|9a001166-8198-4237-92de-4f0266ce26a0: Claiming unknown Feb 1 04:58:19 localhost systemd-udevd[312722]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:58:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:19.777 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-8f28b580-fb6e-4167-81da-20b98b3e9051', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f28b580-fb6e-4167-81da-20b98b3e9051', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fea4c3ac6fd14aee8b0de1bad5f8673a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=578abda3-3a27-44f6-a802-bbe6cde94e49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9a001166-8198-4237-92de-4f0266ce26a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:19.778 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9a001166-8198-4237-92de-4f0266ce26a0 in datapath 8f28b580-fb6e-4167-81da-20b98b3e9051 bound to our chassis#033[00m Feb 1 04:58:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:19.779 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8f28b580-fb6e-4167-81da-20b98b3e9051 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:19.781 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[74e114ff-5e50-4375-ab90-84ab6aa872fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.796 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:19 localhost ovn_controller[152787]: 2026-02-01T09:58:19Z|00205|binding|INFO|Setting lport 9a001166-8198-4237-92de-4f0266ce26a0 ovn-installed in OVS Feb 1 04:58:19 localhost ovn_controller[152787]: 2026-02-01T09:58:19Z|00206|binding|INFO|Setting 
lport 9a001166-8198-4237-92de-4f0266ce26a0 up in Southbound Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.800 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost journal[224955]: ethtool ioctl error on tap9a001166-81: No such device Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.831 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:19 localhost nova_compute[274317]: 2026-02-01 09:58:19.855 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:20 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:20.027 2 INFO neutron.agent.securitygroups_rpc [None req-26640e7d-8ddf-4cc3-b0ea-945d98bbd76e d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 28 KiB/s wr, 52 op/s Feb 1 04:58:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e183 e183: 6 total, 6 up, 6 in Feb 1 04:58:20 localhost podman[312793]: Feb 1 04:58:20 localhost podman[312793]: 2026-02-01 09:58:20.585889607 +0000 UTC m=+0.064006354 container create fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:20 localhost systemd[1]: Started libpod-conmon-fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1.scope. Feb 1 04:58:20 localhost podman[312793]: 2026-02-01 09:58:20.553991499 +0000 UTC m=+0.032108226 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:20 localhost systemd[1]: Started libcrun container. Feb 1 04:58:20 localhost systemd[1]: tmp-crun.TIL960.mount: Deactivated successfully. 
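The binding sequence above ("Claiming lport", "Setting lport ... ovn-installed in OVS", "Setting lport ... up in Southbound") can be cross-checked against the OVN southbound database. The helper below is only a sketch: it assumes ovn-sbctl is available on the node and accepts JSON output for its database commands, and the logical-port UUID is the one from this excerpt. The interleaved "ethtool ioctl error ... No such device" messages appear to be transient probes made while the tap device is still being set up rather than a failure of the binding itself.

```python
import json
import subprocess

# Sketch only: confirm that a logical port the log claims to have bound is
# present in the Southbound DB. UUID taken from the excerpt above.
LPORT = "9a001166-8198-4237-92de-4f0266ce26a0"

def find_port_binding(logical_port: str = LPORT) -> dict:
    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         f"logical_port={logical_port}"],
        capture_output=True, text=True, check=True)
    # Returns the raw JSON table (headings + data rows) printed by ovn-sbctl.
    return json.loads(out.stdout)
```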
Feb 1 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa36a994553b881feddf38c260e7f34cbb97e21c52634976f025a88a1d4d79cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:20 localhost podman[312793]: 2026-02-01 09:58:20.676130073 +0000 UTC m=+0.154246810 container init fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:58:20 localhost podman[312793]: 2026-02-01 09:58:20.689551629 +0000 UTC m=+0.167668376 container start fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:58:20 localhost dnsmasq[312811]: started, version 2.85 cachesize 150 Feb 1 04:58:20 localhost dnsmasq[312811]: DNS service limited to local subnets Feb 1 04:58:20 localhost dnsmasq[312811]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:20 localhost dnsmasq[312811]: warning: no upstream servers configured Feb 1 04:58:20 localhost dnsmasq-dhcp[312811]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:58:20 localhost dnsmasq[312811]: read /var/lib/neutron/dhcp/8f28b580-fb6e-4167-81da-20b98b3e9051/addn_hosts - 0 addresses Feb 1 04:58:20 localhost dnsmasq-dhcp[312811]: read /var/lib/neutron/dhcp/8f28b580-fb6e-4167-81da-20b98b3e9051/host Feb 1 04:58:20 localhost dnsmasq-dhcp[312811]: read /var/lib/neutron/dhcp/8f28b580-fb6e-4167-81da-20b98b3e9051/opts Feb 1 04:58:20 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:20.892 259225 INFO neutron.agent.dhcp.agent [None req-0355b246-ad0c-469f-97c7-28507c1a8e5e - - - - - -] DHCP configuration for ports {'41c64030-4370-4e37-b8a1-94597c9c8e2f'} is completed#033[00m Feb 1 04:58:21 localhost ovn_controller[152787]: 2026-02-01T09:58:21Z|00207|binding|INFO|Removing iface tap9a001166-81 ovn-installed in OVS Feb 1 04:58:21 localhost ovn_controller[152787]: 2026-02-01T09:58:21Z|00208|binding|INFO|Removing lport 9a001166-8198-4237-92de-4f0266ce26a0 ovn-installed in OVS Feb 1 04:58:21 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:21.013 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port dc7b660d-1162-4d4d-ae2d-e314c0d2e224 with type ""#033[00m Feb 1 04:58:21 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:21.015 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], 
port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-8f28b580-fb6e-4167-81da-20b98b3e9051', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f28b580-fb6e-4167-81da-20b98b3e9051', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fea4c3ac6fd14aee8b0de1bad5f8673a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=578abda3-3a27-44f6-a802-bbe6cde94e49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9a001166-8198-4237-92de-4f0266ce26a0) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:21 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:21.017 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9a001166-8198-4237-92de-4f0266ce26a0 in datapath 8f28b580-fb6e-4167-81da-20b98b3e9051 unbound from our chassis#033[00m Feb 1 04:58:21 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:21.019 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8f28b580-fb6e-4167-81da-20b98b3e9051 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:21 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:21.020 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5dda09d2-e287-4111-b6e0-91fcae2ca37f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:21 localhost nova_compute[274317]: 2026-02-01 09:58:21.045 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:21 localhost dnsmasq[312811]: exiting on receipt of SIGTERM Feb 1 04:58:21 localhost podman[312827]: 2026-02-01 09:58:21.073985641 +0000 UTC m=+0.080710931 container kill fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:58:21 localhost systemd[1]: libpod-fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1.scope: Deactivated successfully. 
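The ovn_metadata_agent DEBUG lines above print entire Port_Binding rows, and the Neutron-specific context (CIDRs, device owner, project and network name) sits in the external_ids dictionary embedded in them. A throwaway parser can pull just that dictionary out of a captured line; the sketch below assumes the flat, repr-style formatting seen in this excerpt, with no nested braces inside the dictionary.

```python
import ast
import re

# Throwaway sketch, assuming the flat external_ids={...} repr seen in the
# ovn_metadata_agent DEBUG lines above (string keys and values only).
EXT_IDS = re.compile(r"external_ids=(\{[^}]*\})")

def neutron_external_ids(log_line: str) -> dict:
    m = EXT_IDS.search(log_line)
    # ast.literal_eval parses the Python-literal dict without using eval().
    return ast.literal_eval(m.group(1)) if m else {}

# e.g. neutron_external_ids(line)["neutron:cidrs"] -> '2001:db8::2/64'
```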
Feb 1 04:58:21 localhost podman[312841]: 2026-02-01 09:58:21.137325424 +0000 UTC m=+0.053529850 container died fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:21 localhost podman[312841]: 2026-02-01 09:58:21.168403597 +0000 UTC m=+0.084608023 container cleanup fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:21 localhost systemd[1]: libpod-conmon-fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1.scope: Deactivated successfully. Feb 1 04:58:21 localhost nova_compute[274317]: 2026-02-01 09:58:21.212 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:21 localhost podman[312846]: 2026-02-01 09:58:21.216317521 +0000 UTC m=+0.122388534 container remove fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8f28b580-fb6e-4167-81da-20b98b3e9051, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:58:21 localhost nova_compute[274317]: 2026-02-01 09:58:21.226 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:21 localhost kernel: device tap9a001166-81 left promiscuous mode Feb 1 04:58:21 localhost nova_compute[274317]: 2026-02-01 09:58:21.237 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:21 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:21.252 259225 INFO neutron.agent.dhcp.agent [None req-70d9c5bb-c8f3-42c0-a492-aa912eb246c2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:21 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:21.252 259225 INFO neutron.agent.dhcp.agent [None req-70d9c5bb-c8f3-42c0-a492-aa912eb246c2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:21 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:21.277 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:21 localhost 
ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:58:21 Feb 1 04:58:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:58:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:58:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'manila_data', 'backups', 'vms', 'volumes', 'images'] Feb 1 04:58:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006580482708682301 of space, bias 1.0, pg target 1.31609654173646 quantized to 32 (current 32) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.4071718546435884e-05 quantized to 32 (current 32) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:58:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 7.088393355667225e-06 of space, bias 4.0, pg target 0.0056234587288293315 quantized to 16 (current 16) Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: 
load_schedules Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:58:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:58:21 localhost systemd[1]: var-lib-containers-storage-overlay-fa36a994553b881feddf38c260e7f34cbb97e21c52634976f025a88a1d4d79cc-merged.mount: Deactivated successfully. Feb 1 04:58:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc39355a2956ee74a90ca1008c91234f5246b5b112f8b8dca778b8fd8f1bfee1-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:21 localhost systemd[1]: run-netns-qdhcp\x2d8f28b580\x2dfb6e\x2d4167\x2d81da\x2d20b98b3e9051.mount: Deactivated successfully. Feb 1 04:58:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e184 e184: 6 total, 6 up, 6 in Feb 1 04:58:21 localhost nova_compute[274317]: 2026-02-01 09:58:21.761 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 31 KiB/s wr, 56 op/s Feb 1 04:58:22 localhost nova_compute[274317]: 2026-02-01 09:58:22.361 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:22 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:22.603 2 INFO neutron.agent.securitygroups_rpc [None req-4dc119ae-7dfd-4794-add0-4c162f41d887 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e185 e185: 6 total, 6 up, 6 in Feb 1 04:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
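The pg_autoscaler lines above carry enough information to reproduce the raw PG targets they report. Assuming the usual defaults for this 6-OSD cluster (mon_target_pg_per_osd=100 and 3x replicated pools, giving a per-root budget of 6 * 100 / 3 = 200 PGs), the 'vms' pool's logged target falls out directly from the quick check below; the capacity ratio and bias are copied from the log. The other pools land close to, but not exactly on, this simple product, and the final "quantized to" values additionally reflect power-of-two rounding, per-pool minimums and the autoscaler's change threshold.

```python
# Quick arithmetic check of the pg_autoscaler output above, assuming default
# mon_target_pg_per_osd=100 and 3x replication on this 6-OSD cluster.
capacity_ratio = 0.006580482708682301      # 'vms': "using ... of space"
bias = 1.0                                 # 'vms': "bias 1.0"
root_pg_budget = 6 * 100 / 3               # OSDs * target PGs per OSD / size

pg_target = capacity_ratio * bias * root_pg_budget
print(pg_target)   # 1.3160965417364602, matching "pg target 1.31609654173646"
```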
Feb 1 04:58:23 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:23.852 259225 INFO neutron.agent.linux.ip_lib [None req-207058d4-8934-477d-9997-5666c14de9b7 - - - - - -] Device tap9d56394b-75 cannot be used as it has no MAC address#033[00m Feb 1 04:58:23 localhost nova_compute[274317]: 2026-02-01 09:58:23.876 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:23 localhost kernel: device tap9d56394b-75 entered promiscuous mode Feb 1 04:58:23 localhost NetworkManager[5972]: [1769939903.8859] manager: (tap9d56394b-75): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Feb 1 04:58:23 localhost nova_compute[274317]: 2026-02-01 09:58:23.887 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:23 localhost systemd-udevd[312912]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:58:23 localhost ovn_controller[152787]: 2026-02-01T09:58:23Z|00209|binding|INFO|Claiming lport 9d56394b-750d-4167-8e00-5138f0e20ab4 for this chassis. Feb 1 04:58:23 localhost ovn_controller[152787]: 2026-02-01T09:58:23Z|00210|binding|INFO|9d56394b-750d-4167-8e00-5138f0e20ab4: Claiming unknown Feb 1 04:58:23 localhost podman[312877]: 2026-02-01 09:58:23.894398171 +0000 UTC m=+0.098322897 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:58:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:23.904 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-ddb7490a-2172-4022-90d8-32c9167c3083', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-ddb7490a-2172-4022-90d8-32c9167c3083', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fea4c3ac6fd14aee8b0de1bad5f8673a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08cdf106-e071-4811-8b04-e1f5131f8f49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9d56394b-750d-4167-8e00-5138f0e20ab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:23.906 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9d56394b-750d-4167-8e00-5138f0e20ab4 in datapath ddb7490a-2172-4022-90d8-32c9167c3083 bound to our chassis#033[00m Feb 1 04:58:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:23.908 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ddb7490a-2172-4022-90d8-32c9167c3083 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:23 localhost podman[312877]: 2026-02-01 09:58:23.909569102 +0000 UTC m=+0.113493848 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:58:23 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:23.910 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a52351-28df-45ca-8dbc-8916dac014a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost ovn_controller[152787]: 2026-02-01T09:58:23Z|00211|binding|INFO|Setting lport 9d56394b-750d-4167-8e00-5138f0e20ab4 ovn-installed in OVS Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost ovn_controller[152787]: 
2026-02-01T09:58:23Z|00212|binding|INFO|Setting lport 9d56394b-750d-4167-8e00-5138f0e20ab4 up in Southbound Feb 1 04:58:23 localhost nova_compute[274317]: 2026-02-01 09:58:23.924 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:23 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost journal[224955]: ethtool ioctl error on tap9d56394b-75: No such device Feb 1 04:58:23 localhost nova_compute[274317]: 2026-02-01 09:58:23.957 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:23 localhost nova_compute[274317]: 2026-02-01 09:58:23.990 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:23 localhost podman[312876]: 2026-02-01 09:58:23.994040979 +0000 UTC m=+0.198742369 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:58:24 localhost podman[312876]: 2026-02-01 09:58:24.058785195 +0000 UTC m=+0.263486585 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 04:58:24 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:58:24 localhost podman[312963]: 2026-02-01 09:58:24.207824403 +0000 UTC m=+0.107148251 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, release=1769056855, io.openshift.expose-services=) Feb 1 04:58:24 localhost podman[312963]: 2026-02-01 09:58:24.248522835 +0000 UTC m=+0.147846663 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2026-01-22T05:09:47Z, release=1769056855, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 04:58:24 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:58:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 9.1 KiB/s wr, 44 op/s Feb 1 04:58:24 localhost podman[312986]: 2026-02-01 09:58:24.317705848 +0000 UTC m=+0.097223453 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:58:24 localhost podman[312986]: 2026-02-01 09:58:24.350638698 +0000 UTC m=+0.130156273 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:58:24 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:58:24 localhost podman[313040]: Feb 1 04:58:24 localhost podman[313040]: 2026-02-01 09:58:24.749342212 +0000 UTC m=+0.087112810 container create 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:24 localhost systemd[1]: tmp-crun.tdg12E.mount: Deactivated successfully. Feb 1 04:58:24 localhost systemd[1]: Started libpod-conmon-835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c.scope. Feb 1 04:58:24 localhost systemd[1]: Started libcrun container. Feb 1 04:58:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e186 e186: 6 total, 6 up, 6 in Feb 1 04:58:24 localhost podman[313040]: 2026-02-01 09:58:24.706541816 +0000 UTC m=+0.044312474 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e196fdec99f1e2f5b5d327d1b1bde3b88b31de40ec25a82cda6557ff2bf9c71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:24 localhost podman[313040]: 2026-02-01 09:58:24.820359143 +0000 UTC m=+0.158129771 container init 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:58:24 localhost podman[313040]: 2026-02-01 09:58:24.830876318 +0000 UTC m=+0.168646916 container start 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:58:24 localhost 
dnsmasq[313058]: started, version 2.85 cachesize 150 Feb 1 04:58:24 localhost dnsmasq[313058]: DNS service limited to local subnets Feb 1 04:58:24 localhost dnsmasq[313058]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:24 localhost dnsmasq[313058]: warning: no upstream servers configured Feb 1 04:58:24 localhost dnsmasq-dhcp[313058]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 1 04:58:24 localhost dnsmasq[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/addn_hosts - 0 addresses Feb 1 04:58:24 localhost dnsmasq-dhcp[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/host Feb 1 04:58:24 localhost dnsmasq-dhcp[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/opts Feb 1 04:58:24 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:24.892 259225 INFO neutron.agent.dhcp.agent [None req-207058d4-8934-477d-9997-5666c14de9b7 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:23Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d1696f58-ce55-430b-aa17-07e03a47e863, ip_allocation=immediate, mac_address=fa:16:3e:1a:26:a2, name=tempest-PortsIpV6TestJSON-760066674, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:58:21Z, description=, dns_domain=, id=ddb7490a-2172-4022-90d8-32c9167c3083, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1383902506, port_security_enabled=True, project_id=fea4c3ac6fd14aee8b0de1bad5f8673a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14606, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2478, status=ACTIVE, subnets=['2f006930-26ca-4f51-9fb8-930c6e961e50'], tags=[], tenant_id=fea4c3ac6fd14aee8b0de1bad5f8673a, updated_at=2026-02-01T09:58:23Z, vlan_transparent=None, network_id=ddb7490a-2172-4022-90d8-32c9167c3083, port_security_enabled=True, project_id=fea4c3ac6fd14aee8b0de1bad5f8673a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2486, status=DOWN, tags=[], tenant_id=fea4c3ac6fd14aee8b0de1bad5f8673a, updated_at=2026-02-01T09:58:23Z on network ddb7490a-2172-4022-90d8-32c9167c3083#033[00m Feb 1 04:58:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:25.043 259225 INFO neutron.agent.dhcp.agent [None req-e62f7f83-c786-4ac0-9dcb-0985735b3227 - - - - - -] DHCP configuration for ports {'691e6e5a-5b53-4cda-aa63-ca7824ce5dca'} is completed#033[00m Feb 1 04:58:25 localhost dnsmasq[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/addn_hosts - 1 addresses Feb 1 04:58:25 localhost dnsmasq-dhcp[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/host Feb 1 04:58:25 localhost dnsmasq-dhcp[313058]: read /var/lib/neutron/dhcp/ddb7490a-2172-4022-90d8-32c9167c3083/opts Feb 1 04:58:25 localhost podman[313077]: 2026-02-01 09:58:25.085825208 +0000 UTC m=+0.061966491 container kill 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:58:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:25.265 259225 INFO neutron.agent.dhcp.agent [None req-24c82a0f-4fe7-4088-8621-1cd89bbb5b2a - - - - - -] DHCP configuration for ports {'d1696f58-ce55-430b-aa17-07e03a47e863'} is completed#033[00m Feb 1 04:58:25 localhost dnsmasq[313058]: exiting on receipt of SIGTERM Feb 1 04:58:25 localhost podman[313114]: 2026-02-01 09:58:25.461485038 +0000 UTC m=+0.057111081 container kill 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 1 04:58:25 localhost systemd[1]: libpod-835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c.scope: Deactivated successfully. 
Feb 1 04:58:25 localhost ovn_controller[152787]: 2026-02-01T09:58:25Z|00213|binding|INFO|Removing iface tap9d56394b-75 ovn-installed in OVS Feb 1 04:58:25 localhost ovn_controller[152787]: 2026-02-01T09:58:25Z|00214|binding|INFO|Removing lport 9d56394b-750d-4167-8e00-5138f0e20ab4 ovn-installed in OVS Feb 1 04:58:25 localhost nova_compute[274317]: 2026-02-01 09:58:25.482 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:25.483 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 34732b3b-eb6e-4f20-9a97-5a558283bc2d with type ""#033[00m Feb 1 04:58:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:25.484 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-ddb7490a-2172-4022-90d8-32c9167c3083', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddb7490a-2172-4022-90d8-32c9167c3083', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fea4c3ac6fd14aee8b0de1bad5f8673a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08cdf106-e071-4811-8b04-e1f5131f8f49, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9d56394b-750d-4167-8e00-5138f0e20ab4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:25.485 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 9d56394b-750d-4167-8e00-5138f0e20ab4 in datapath ddb7490a-2172-4022-90d8-32c9167c3083 unbound from our chassis#033[00m Feb 1 04:58:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:25.486 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ddb7490a-2172-4022-90d8-32c9167c3083 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:25 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:25.487 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3cf813-d8df-408d-971c-64c1f0c8ed2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:25 localhost nova_compute[274317]: 2026-02-01 09:58:25.489 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:25 localhost podman[313127]: 2026-02-01 09:58:25.531857979 +0000 UTC m=+0.058993809 container died 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:58:25 localhost podman[313127]: 2026-02-01 09:58:25.559910448 +0000 UTC m=+0.087046268 container cleanup 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:25 localhost systemd[1]: libpod-conmon-835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c.scope: Deactivated successfully. Feb 1 04:58:25 localhost podman[313129]: 2026-02-01 09:58:25.613187719 +0000 UTC m=+0.135068347 container remove 835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddb7490a-2172-4022-90d8-32c9167c3083, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:58:25 localhost nova_compute[274317]: 2026-02-01 09:58:25.625 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:25 localhost kernel: device tap9d56394b-75 left promiscuous mode Feb 1 04:58:25 localhost nova_compute[274317]: 2026-02-01 09:58:25.638 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:25.656 259225 INFO neutron.agent.dhcp.agent [None req-4fcb9f02-55d7-41dd-bc48-a2ed016ae948 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:25.657 259225 INFO neutron.agent.dhcp.agent [None req-4fcb9f02-55d7-41dd-bc48-a2ed016ae948 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:25 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:25.708 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:25 localhost systemd[1]: var-lib-containers-storage-overlay-6e196fdec99f1e2f5b5d327d1b1bde3b88b31de40ec25a82cda6557ff2bf9c71-merged.mount: Deactivated successfully. Feb 1 04:58:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-835b8ad501100b60648f14b459b7768157807b06b432edf5c549cfac1776687c-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:25 localhost systemd[1]: run-netns-qdhcp\x2dddb7490a\x2d2172\x2d4022\x2d90d8\x2d32c9167c3083.mount: Deactivated successfully. 
Feb 1 04:58:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e187 e187: 6 total, 6 up, 6 in Feb 1 04:58:26 localhost nova_compute[274317]: 2026-02-01 09:58:26.108 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:26 localhost nova_compute[274317]: 2026-02-01 09:58:26.215 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s Feb 1 04:58:26 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:26.404 2 INFO neutron.agent.securitygroups_rpc [None req-b01da263-4ebd-4a0d-81ec-4ece56b6a941 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e188 e188: 6 total, 6 up, 6 in Feb 1 04:58:27 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:27.361 2 INFO neutron.agent.securitygroups_rpc [None req-6b687509-4ccb-4206-8762-0407780a338c d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.395 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:27.459 259225 INFO neutron.agent.linux.ip_lib [None req-3fa6789c-9961-4c6f-b8eb-8c15dedeee1f - - - - - -] Device tapbfd160dd-fa cannot be used as it has no MAC address#033[00m Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.481 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost kernel: device tapbfd160dd-fa entered promiscuous mode Feb 1 04:58:27 localhost NetworkManager[5972]: [1769939907.4910] manager: (tapbfd160dd-fa): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.491 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost ovn_controller[152787]: 2026-02-01T09:58:27Z|00215|binding|INFO|Claiming lport bfd160dd-fa98-4ea9-815c-a97263cc82ea for this chassis. Feb 1 04:58:27 localhost ovn_controller[152787]: 2026-02-01T09:58:27Z|00216|binding|INFO|bfd160dd-fa98-4ea9-815c-a97263cc82ea: Claiming unknown Feb 1 04:58:27 localhost systemd-udevd[313168]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:58:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:27.505 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae16595-a7d2-468e-9eb0-cb266a7101cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=bfd160dd-fa98-4ea9-815c-a97263cc82ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:27.508 158655 INFO neutron.agent.ovn.metadata.agent [-] Port bfd160dd-fa98-4ea9-815c-a97263cc82ea in datapath 42a0a17b-be28-4b0f-b80f-055ba2c3d245 bound to our chassis#033[00m Feb 1 04:58:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:27.512 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 42a0a17b-be28-4b0f-b80f-055ba2c3d245 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:27 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:27.513 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[55ca846c-1be3-4baf-9bf2-63f0b3684971]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.537 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost ovn_controller[152787]: 2026-02-01T09:58:27Z|00217|binding|INFO|Setting lport bfd160dd-fa98-4ea9-815c-a97263cc82ea ovn-installed in OVS Feb 1 04:58:27 localhost ovn_controller[152787]: 2026-02-01T09:58:27Z|00218|binding|INFO|Setting lport bfd160dd-fa98-4ea9-815c-a97263cc82ea up in Southbound Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.545 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.574 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost nova_compute[274317]: 2026-02-01 09:58:27.601 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:27 localhost neutron_sriov_agent[252054]: 2026-02-01 
09:58:27.858 2 INFO neutron.agent.securitygroups_rpc [None req-8578e386-0bfd-4f36-a9ea-ecec11376e1e d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 6.3 KiB/s wr, 71 op/s Feb 1 04:58:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e189 e189: 6 total, 6 up, 6 in Feb 1 04:58:28 localhost podman[313223]: Feb 1 04:58:28 localhost podman[313223]: 2026-02-01 09:58:28.463180875 +0000 UTC m=+0.100549627 container create e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:58:28 localhost systemd[1]: Started libpod-conmon-e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103.scope. Feb 1 04:58:28 localhost podman[313223]: 2026-02-01 09:58:28.410325558 +0000 UTC m=+0.047694339 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:28 localhost systemd[1]: Started libcrun container. Feb 1 04:58:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785448961ff9a090dea5bf4a331b82550a367213dfd8e1351fa4cefd516f8cf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:28 localhost podman[313223]: 2026-02-01 09:58:28.537791568 +0000 UTC m=+0.175160319 container init e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 1 04:58:28 localhost podman[313223]: 2026-02-01 09:58:28.549253993 +0000 UTC m=+0.186622744 container start e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:28 localhost dnsmasq[313241]: started, version 2.85 cachesize 150 Feb 1 04:58:28 localhost dnsmasq[313241]: DNS service limited to local subnets Feb 1 04:58:28 localhost dnsmasq[313241]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect 
inotify dumpfile Feb 1 04:58:28 localhost dnsmasq[313241]: warning: no upstream servers configured Feb 1 04:58:28 localhost dnsmasq-dhcp[313241]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:28 localhost dnsmasq[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:28 localhost dnsmasq-dhcp[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:28 localhost dnsmasq-dhcp[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:28 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:28.755 259225 INFO neutron.agent.dhcp.agent [None req-84d5578d-da31-443a-afd5-02dffc3dd952 - - - - - -] DHCP configuration for ports {'ff54b909-b3b9-4669-8851-459606a86b19', 'f5db53be-fb30-4c27-aabf-1c052ca12256'} is completed#033[00m Feb 1 04:58:28 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:28.793 2 INFO neutron.agent.securitygroups_rpc [None req-c10ec1b0-3019-415a-9281-06c26de3609b d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:29 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:29.642 2 INFO neutron.agent.securitygroups_rpc [None req-53541160-40c7-461f-a788-c0d63f1c152b d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:30 localhost podman[236852]: time="2026-02-01T09:58:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:58:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:30 localhost podman[236852]: @ - - [01/Feb/2026:09:58:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160832 "" "Go-http-client/1.1" Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e/.meta.tmp' Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e/.meta.tmp' to config b'/volumes/_nogroup/a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e/.meta' Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:30 localhost podman[236852]: @ - - [01/Feb/2026:09:58:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19710 "" "Go-http-client/1.1" Feb 1 04:58:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e", "format": "json"}]: dispatch Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 6.4 KiB/s wr, 72 op/s Feb 1 04:58:30 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:30.847 2 INFO neutron.agent.securitygroups_rpc [None req-7e2c05f6-614a-4970-b06a-02bbc809b5c5 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:31 localhost nova_compute[274317]: 2026-02-01 09:58:31.216 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:31 localhost openstack_network_exporter[239388]: ERROR 09:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:58:31 localhost openstack_network_exporter[239388]: Feb 1 04:58:31 localhost openstack_network_exporter[239388]: ERROR 09:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:58:31 localhost openstack_network_exporter[239388]: Feb 1 04:58:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e190 e190: 6 total, 6 up, 6 in Feb 1 04:58:32 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:32.250 2 INFO neutron.agent.securitygroups_rpc [None req-29619353-dbda-49ae-a09a-12a5be7ce5b3 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['179c1cf2-2f2b-4c28-9577-447c415ef292']#033[00m Feb 1 04:58:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 5.8 KiB/s wr, 66 op/s Feb 1 04:58:32 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:32.313 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:30Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=06e8676e-e136-4f95-a1cd-29d3bab497ba, ip_allocation=immediate, mac_address=fa:16:3e:91:24:70, name=tempest-PortsTestJSON-495577314, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:57:25Z, description=, dns_domain=, id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1335101638, port_security_enabled=True, 
project_id=fc33978cc1c94009a152ec3cacbfe0e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39008, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2183, status=ACTIVE, subnets=['50402cd1-8e08-4101-9563-d54c0a29610f'], tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:26Z, vlan_transparent=None, network_id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['179c1cf2-2f2b-4c28-9577-447c415ef292'], standard_attr_id=2516, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:31Z on network 42a0a17b-be28-4b0f-b80f-055ba2c3d245#033[00m Feb 1 04:58:32 localhost nova_compute[274317]: 2026-02-01 09:58:32.441 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:32 localhost dnsmasq[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:32 localhost podman[313259]: 2026-02-01 09:58:32.624412142 +0000 UTC m=+0.059899037 container kill e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 04:58:32 localhost dnsmasq-dhcp[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:32 localhost dnsmasq-dhcp[313241]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:32 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:32.964 259225 INFO neutron.agent.dhcp.agent [None req-2ba2f90c-3129-4257-91e0-1db79c5c1b4c - - - - - -] DHCP configuration for ports {'06e8676e-e136-4f95-a1cd-29d3bab497ba'} is completed#033[00m Feb 1 04:58:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:58:33 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1904111334' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:58:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:33 localhost podman[313294]: 2026-02-01 09:58:33.781277137 +0000 UTC m=+0.065576772 container kill e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:58:33 localhost dnsmasq[313241]: exiting on receipt of SIGTERM Feb 1 04:58:33 localhost systemd[1]: libpod-e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103.scope: Deactivated successfully. Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta' Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "format": "json"}]: dispatch Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:33 localhost podman[313313]: 2026-02-01 09:58:33.878441228 +0000 UTC m=+0.067831692 container died e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:58:33 localhost systemd[1]: var-lib-containers-storage-overlay-785448961ff9a090dea5bf4a331b82550a367213dfd8e1351fa4cefd516f8cf1-merged.mount: Deactivated successfully. Feb 1 04:58:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:33.928 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:33 localhost nova_compute[274317]: 2026-02-01 09:58:33.928 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:33 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:33.931 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:58:33 localhost podman[313313]: 2026-02-01 09:58:33.979931963 +0000 UTC m=+0.169322377 container remove e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:58:33 localhost systemd[1]: libpod-conmon-e831fbd361ac11f8dfda1a2661c1ad8d5a36f458a1996a9a622e64e3526fa103.scope: Deactivated successfully. 
Feb 1 04:58:33 localhost podman[313307]: 2026-02-01 09:58:33.993416461 +0000 UTC m=+0.197217191 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 1 04:58:34 localhost podman[313307]: 2026-02-01 09:58:34.031560273 +0000 UTC m=+0.235361003 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:58:34 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:58:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 225 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 8.5 KiB/s wr, 89 op/s Feb 1 04:58:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:58:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2894475016' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:58:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:58:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2894475016' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:58:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:34.778 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae16595-a7d2-468e-9eb0-cb266a7101cb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff54b909-b3b9-4669-8851-459606a86b19) old=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:34 
localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:34.781 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff54b909-b3b9-4669-8851-459606a86b19 in datapath 42a0a17b-be28-4b0f-b80f-055ba2c3d245 updated#033[00m Feb 1 04:58:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:34.785 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 428ddfe1-cc5b-46ff-b105-7d44dab1a252 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:58:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:34.785 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42a0a17b-be28-4b0f-b80f-055ba2c3d245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:34 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:34.786 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[447eab43-bbf6-4582-902e-d11c322e86bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:58:35 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/741059174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:58:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:35.409 2 INFO neutron.agent.securitygroups_rpc [None req-dcb1a11b-934a-4ada-b696-4f65471e82a7 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e191 e191: 6 total, 6 up, 6 in Feb 1 04:58:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:35.866 2 INFO neutron.agent.securitygroups_rpc [None req-76985e45-05b5-4e3d-9a6f-ec3725c3f7a9 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['179c1cf2-2f2b-4c28-9577-447c415ef292', 'c3da8dd5-026e-4efc-b2ae-7f08a7679dbe']#033[00m Feb 1 04:58:35 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:35.874 2 INFO neutron.agent.securitygroups_rpc [None req-f5224dd1-77ce-42ca-881a-3d11d7bd93a9 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:36 localhost nova_compute[274317]: 2026-02-01 09:58:36.219 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:36 localhost podman[313398]: Feb 1 04:58:36 localhost podman[313398]: 2026-02-01 09:58:36.234138348 +0000 UTC m=+0.117751500 container create 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 1 04:58:36 localhost systemd[1]: Started libpod-conmon-2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565.scope. Feb 1 04:58:36 localhost podman[313398]: 2026-02-01 09:58:36.18798453 +0000 UTC m=+0.071597682 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:36 localhost systemd[1]: Started libcrun container. Feb 1 04:58:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e3dd6208f5f8f9cad46217d918b863e3df3d372eef86fd21e9c103ee3ff3eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 225 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 3.6 KiB/s wr, 35 op/s Feb 1 04:58:36 localhost podman[313398]: 2026-02-01 09:58:36.311223221 +0000 UTC m=+0.194836373 container init 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:36 localhost podman[313398]: 2026-02-01 09:58:36.320908873 +0000 UTC m=+0.204522025 container start 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:36 localhost dnsmasq[313416]: started, version 2.85 cachesize 150 Feb 1 04:58:36 localhost dnsmasq[313416]: DNS service limited to local subnets Feb 1 04:58:36 localhost dnsmasq[313416]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:36 localhost dnsmasq[313416]: warning: no upstream servers configured Feb 1 04:58:36 localhost dnsmasq-dhcp[313416]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:36 localhost dnsmasq-dhcp[313416]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 1 04:58:36 localhost dnsmasq[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:36 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:36 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.384 259225 INFO neutron.agent.dhcp.agent [None req-2ebfb342-f83f-470f-b395-6ea245cc524a - - - - - -] Trigger reload_allocations for 
port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:30Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=06e8676e-e136-4f95-a1cd-29d3bab497ba, ip_allocation=immediate, mac_address=fa:16:3e:91:24:70, name=tempest-PortsTestJSON-1566818112, network_id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['c3da8dd5-026e-4efc-b2ae-7f08a7679dbe'], standard_attr_id=2516, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:35Z on network 42a0a17b-be28-4b0f-b80f-055ba2c3d245#033[00m Feb 1 04:58:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.387 259225 INFO oslo.privsep.daemon [None req-2ebfb342-f83f-470f-b395-6ea245cc524a - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpzfw4hr0f/privsep.sock']#033[00m Feb 1 04:58:36 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.703 259225 INFO neutron.agent.dhcp.agent [None req-3d7f0e2f-d2b7-403f-a8da-cc41e0e59b4b - - - - - -] DHCP configuration for ports {'f5db53be-fb30-4c27-aabf-1c052ca12256', 'ff54b909-b3b9-4669-8851-459606a86b19', '06e8676e-e136-4f95-a1cd-29d3bab497ba', 'bfd160dd-fa98-4ea9-815c-a97263cc82ea'} is completed#033[00m Feb 1 04:58:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e192 e192: 6 total, 6 up, 6 in Feb 1 04:58:36 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:36.940 2 INFO neutron.agent.securitygroups_rpc [None req-1e2fb4b2-ba87-4ef5-8fe2-a6ebe885c08a 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['c3da8dd5-026e-4efc-b2ae-7f08a7679dbe']#033[00m Feb 1 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:58:37 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:37.058 259225 INFO oslo.privsep.daemon [None req-2ebfb342-f83f-470f-b395-6ea245cc524a - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 1 04:58:37 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.937 313421 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 1 04:58:37 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.942 313421 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 1 04:58:37 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.945 313421 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 1 04:58:37 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:36.945 313421 INFO oslo.privsep.daemon [-] privsep daemon running as pid 313421#033[00m Feb 1 04:58:37 localhost podman[313422]: 2026-02-01 09:58:37.122564036 +0000 UTC m=+0.085075322 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:58:37 localhost podman[313422]: 2026-02-01 09:58:37.156017069 +0000 UTC m=+0.118528315 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 04:58:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "snap_name": "c067672e-1db0-4df2-a2d8-4c39c3271393", "format": "json"}]: dispatch Feb 1 04:58:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:37 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 04:58:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:37 localhost dnsmasq-dhcp[313416]: DHCPRELEASE(tapbfd160dd-fa) 10.100.0.8 fa:16:3e:91:24:70 Feb 1 04:58:37 localhost nova_compute[274317]: 2026-02-01 09:58:37.473 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:37 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:37.595 2 INFO neutron.agent.securitygroups_rpc [None req-9cffbc9d-d1cb-4c4b-934e-47effaf28296 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:37 localhost dnsmasq[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:37 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:37 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:37 localhost podman[313465]: 2026-02-01 09:58:37.809099573 +0000 UTC m=+0.061332573 container kill 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:58:37 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:37.933 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 04:58:38 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:38.026 259225 INFO neutron.agent.dhcp.agent [None req-7dd87d10-4d78-49fa-8ace-7f351efa19b1 - - - - - -] DHCP configuration for ports {'06e8676e-e136-4f95-a1cd-29d3bab497ba'} is completed#033[00m Feb 1 04:58:38 localhost dnsmasq[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:38 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:38 localhost dnsmasq-dhcp[313416]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:38 localhost podman[313502]: 2026-02-01 09:58:38.186478864 +0000 UTC m=+0.056003536 container kill 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:38 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:38.217 2 INFO neutron.agent.securitygroups_rpc [None req-901cd980-7a05-470c-b831-e021c0b0d3ee d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 225 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 15 KiB/s wr, 75 op/s Feb 1 04:58:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e193 e193: 6 total, 6 up, 6 in Feb 1 04:58:39 localhost dnsmasq[313416]: exiting on receipt of SIGTERM Feb 1 04:58:39 localhost systemd[1]: libpod-2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565.scope: Deactivated successfully. Feb 1 04:58:39 localhost podman[313575]: 2026-02-01 09:58:39.150354123 +0000 UTC m=+0.081261103 container kill 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 1 04:58:39 localhost podman[313590]: 2026-02-01 09:58:39.216189835 +0000 UTC m=+0.054393486 container died 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:58:39 localhost podman[313590]: 2026-02-01 09:58:39.299313486 +0000 UTC m=+0.137517137 container cleanup 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:58:39 localhost systemd[1]: libpod-conmon-2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565.scope: Deactivated successfully. 
Feb 1 04:58:39 localhost podman[313597]: 2026-02-01 09:58:39.327661499 +0000 UTC m=+0.150244313 container remove 2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 1 04:58:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:58:39 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:58:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:58:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:58:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e194 e194: 6 total, 6 up, 6 in Feb 1 04:58:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:58:39 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 8462a3c3-cad9-48f1-8d30-4b069df0223a (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:58:39 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 8462a3c3-cad9-48f1-8d30-4b069df0223a (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:58:39 localhost ceph-mgr[278126]: [progress INFO root] Completed event 8462a3c3-cad9-48f1-8d30-4b069df0223a (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:58:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:58:39 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:58:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:40.020 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae16595-a7d2-468e-9eb0-cb266a7101cb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff54b909-b3b9-4669-8851-459606a86b19) old=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.18 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:40.024 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff54b909-b3b9-4669-8851-459606a86b19 in datapath 42a0a17b-be28-4b0f-b80f-055ba2c3d245 updated#033[00m Feb 1 04:58:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:40.027 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 428ddfe1-cc5b-46ff-b105-7d44dab1a252 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:58:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:40.027 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42a0a17b-be28-4b0f-b80f-055ba2c3d245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:40 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:40.029 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4cd69c-e7e5-4874-8091-e54252e324ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:40 localhost systemd[1]: var-lib-containers-storage-overlay-37e3dd6208f5f8f9cad46217d918b863e3df3d372eef86fd21e9c103ee3ff3eb-merged.mount: Deactivated successfully. Feb 1 04:58:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ad4b9b0d0bcfb570f406f533c36cc22a71695f544a8daf1ff663d07e93fe565-userdata-shm.mount: Deactivated successfully. 
Feb 1 04:58:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 225 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 16 KiB/s wr, 48 op/s Feb 1 04:58:40 localhost podman[313716]: Feb 1 04:58:40 localhost podman[313716]: 2026-02-01 09:58:40.349395562 +0000 UTC m=+0.095041223 container create c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:58:40 localhost systemd[1]: Started libpod-conmon-c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc.scope. Feb 1 04:58:40 localhost podman[313716]: 2026-02-01 09:58:40.304335088 +0000 UTC m=+0.049980779 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:40 localhost systemd[1]: Started libcrun container. Feb 1 04:58:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925dea6d8969dbc30afe3d38ec9b67cf20165bdd875a10c16a2ec9974bdd03b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:40 localhost podman[313716]: 2026-02-01 09:58:40.424383149 +0000 UTC m=+0.170028810 container init c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 1 04:58:40 localhost podman[313716]: 2026-02-01 09:58:40.434566636 +0000 UTC m=+0.180212297 container start c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 1 04:58:40 localhost dnsmasq[313734]: started, version 2.85 cachesize 150 Feb 1 04:58:40 localhost dnsmasq[313734]: DNS service limited to local subnets Feb 1 04:58:40 localhost dnsmasq[313734]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:40 localhost dnsmasq[313734]: warning: no upstream servers configured Feb 1 04:58:40 localhost dnsmasq-dhcp[313734]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 1 04:58:40 localhost dnsmasq[313734]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:40 localhost 
dnsmasq-dhcp[313734]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:40 localhost dnsmasq-dhcp[313734]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:40 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:40.741 259225 INFO neutron.agent.dhcp.agent [None req-998dab09-fea8-4fa3-8c43-7d40e440673a - - - - - -] DHCP configuration for ports {'bfd160dd-fa98-4ea9-815c-a97263cc82ea', 'ff54b909-b3b9-4669-8851-459606a86b19', 'f5db53be-fb30-4c27-aabf-1c052ca12256'} is completed#033[00m Feb 1 04:58:40 localhost dnsmasq[313734]: exiting on receipt of SIGTERM Feb 1 04:58:40 localhost podman[313752]: 2026-02-01 09:58:40.808525512 +0000 UTC m=+0.060928521 container kill c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:58:40 localhost systemd[1]: libpod-c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc.scope: Deactivated successfully. Feb 1 04:58:40 localhost podman[313765]: 2026-02-01 09:58:40.881437104 +0000 UTC m=+0.054879082 container died c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:58:40 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:58:40 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:58:40 localhost podman[313765]: 2026-02-01 09:58:40.91276736 +0000 UTC m=+0.086209298 container cleanup c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:58:40 localhost systemd[1]: libpod-conmon-c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc.scope: Deactivated successfully. 
Feb 1 04:58:40 localhost podman[313766]: 2026-02-01 09:58:40.961152758 +0000 UTC m=+0.128870208 container remove c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:58:41 localhost systemd[1]: var-lib-containers-storage-overlay-925dea6d8969dbc30afe3d38ec9b67cf20165bdd875a10c16a2ec9974bdd03b4-merged.mount: Deactivated successfully. Feb 1 04:58:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5a4095967a1c724908b8e5409271cf4420c247bf97d1d8a345d1279dd9805fc-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:41 localhost nova_compute[274317]: 2026-02-01 09:58:41.221 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:41 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:41.275 2 INFO neutron.agent.securitygroups_rpc [None req-74812456-6c53-4459-bad0-4aac2b18e077 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:41 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:58:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:58:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:41.775 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:58:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:41.776 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:58:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:41.776 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:58:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e195 e195: 6 total, 6 up, 6 in Feb 1 04:58:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:42.032 2 INFO neutron.agent.securitygroups_rpc [None req-c40bbf53-5b24-4f24-a7f5-e7d11bb1e253 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['2fbf2c39-17e9-4e72-bbbb-e5125197536a']#033[00m Feb 1 04:58:42 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:42.269 2 INFO neutron.agent.securitygroups_rpc [None req-5822cc8d-7dc4-44de-b45f-4af6413189cc d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated 
['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 225 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 13 KiB/s wr, 39 op/s Feb 1 04:58:42 localhost nova_compute[274317]: 2026-02-01 09:58:42.514 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:58:43 localhost podman[313845]: Feb 1 04:58:43 localhost podman[313845]: 2026-02-01 09:58:43.010684832 +0000 UTC m=+0.091566725 container create c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 1 04:58:43 localhost systemd[1]: Started libpod-conmon-c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd.scope. Feb 1 04:58:43 localhost podman[313845]: 2026-02-01 09:58:42.963752189 +0000 UTC m=+0.044634062 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:43 localhost systemd[1]: Started libcrun container. Feb 1 04:58:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36507fc607162f25bf21bac759aebb58f629cfa9b69496d7bbd7d8684964b06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:43 localhost podman[313845]: 2026-02-01 09:58:43.079239458 +0000 UTC m=+0.160121341 container init c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 1 04:58:43 localhost podman[313845]: 2026-02-01 09:58:43.088640532 +0000 UTC m=+0.169522445 container start c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 1 04:58:43 localhost dnsmasq[313861]: started, version 2.85 cachesize 150 Feb 1 04:58:43 localhost dnsmasq[313861]: DNS service limited to local subnets Feb 1 04:58:43 localhost dnsmasq[313861]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile 
Feb 1 04:58:43 localhost dnsmasq[313861]: warning: no upstream servers configured Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 1 04:58:43 localhost dnsmasq[313861]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:43.147 259225 INFO neutron.agent.dhcp.agent [None req-6edf3a3b-2b93-4b5c-a396-6c031b435828 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d96f800e-edb5-41eb-98bd-c4af07c61ec1, ip_allocation=immediate, mac_address=fa:16:3e:05:18:80, name=tempest-PortsTestJSON-440048988, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:57:25Z, description=, dns_domain=, id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1335101638, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39008, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2183, status=ACTIVE, subnets=['0862afad-3da9-478d-89ae-76118bd953d3', '4a542e30-5baa-4eb1-8b2a-1d247e546755'], tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:39Z, vlan_transparent=None, network_id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['2fbf2c39-17e9-4e72-bbbb-e5125197536a'], standard_attr_id=2588, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:41Z on network 42a0a17b-be28-4b0f-b80f-055ba2c3d245#033[00m Feb 1 04:58:43 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:43.303 2 INFO neutron.agent.securitygroups_rpc [None req-850b62dc-2adf-4cc9-b2ae-b1b62bc1910e d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:43.374 259225 INFO neutron.agent.dhcp.agent [None req-a432e735-9a53-4ae2-9db0-e08c38d94ace - - - - - -] DHCP configuration for ports {'f5db53be-fb30-4c27-aabf-1c052ca12256', 'ff54b909-b3b9-4669-8851-459606a86b19', 'bfd160dd-fa98-4ea9-815c-a97263cc82ea'} is completed#033[00m Feb 1 04:58:43 localhost dnsmasq[313861]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:43 localhost dnsmasq-dhcp[313861]: read 
/var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:43 localhost podman[313879]: 2026-02-01 09:58:43.391390707 +0000 UTC m=+0.043123274 container kill c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:58:43 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:43.613 259225 INFO neutron.agent.dhcp.agent [None req-737b90dc-ecb4-47dc-976c-0b070bc84d65 - - - - - -] DHCP configuration for ports {'d96f800e-edb5-41eb-98bd-c4af07c61ec1'} is completed#033[00m Feb 1 04:58:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "snap_name": "c067672e-1db0-4df2-a2d8-4c39c3271393_4eb61cba-a4e1-44c0-96fc-ccd97cf15833", "force": true, "format": "json"}]: dispatch Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393_4eb61cba-a4e1-44c0-96fc-ccd97cf15833, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta' Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393_4eb61cba-a4e1-44c0-96fc-ccd97cf15833, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "snap_name": "c067672e-1db0-4df2-a2d8-4c39c3271393", "force": true, "format": "json"}]: dispatch Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta.tmp' to config b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703/.meta' Feb 1 
04:58:43 localhost dnsmasq[313861]: exiting on receipt of SIGTERM Feb 1 04:58:43 localhost podman[313915]: 2026-02-01 09:58:43.873336087 +0000 UTC m=+0.074245475 container kill c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:58:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c067672e-1db0-4df2-a2d8-4c39c3271393, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:43 localhost systemd[1]: libpod-c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd.scope: Deactivated successfully. Feb 1 04:58:43 localhost podman[313929]: 2026-02-01 09:58:43.939305083 +0000 UTC m=+0.056639176 container died c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:58:43 localhost podman[313929]: 2026-02-01 09:58:43.970091992 +0000 UTC m=+0.087426055 container cleanup c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:43 localhost systemd[1]: libpod-conmon-c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd.scope: Deactivated successfully. Feb 1 04:58:44 localhost systemd[1]: var-lib-containers-storage-overlay-e36507fc607162f25bf21bac759aebb58f629cfa9b69496d7bbd7d8684964b06-merged.mount: Deactivated successfully. Feb 1 04:58:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd-userdata-shm.mount: Deactivated successfully. 
Feb 1 04:58:44 localhost podman[313935]: 2026-02-01 09:58:44.03067009 +0000 UTC m=+0.136311449 container remove c4576f575873cc1f76777736374bbf588300bee42d59e599086b9776acbf62dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:44.184 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae16595-a7d2-468e-9eb0-cb266a7101cb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff54b909-b3b9-4669-8851-459606a86b19) old=Port_Binding(mac=['fa:16:3e:60:f7:71 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:44.188 158655 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff54b909-b3b9-4669-8851-459606a86b19 in datapath 42a0a17b-be28-4b0f-b80f-055ba2c3d245 updated#033[00m Feb 1 04:58:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:44.192 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 428ddfe1-cc5b-46ff-b105-7d44dab1a252 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:58:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:44.192 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42a0a17b-be28-4b0f-b80f-055ba2c3d245, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:44 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:44.194 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[84bc535e-a273-4730-9676-db309a4e70b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 225 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 10 KiB/s wr, 86 op/s Feb 1 04:58:44 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:44.428 2 INFO neutron.agent.securitygroups_rpc [None req-1da9737c-8fe2-425a-864f-b783c535a95b d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e196 e196: 6 total, 6 up, 6 in Feb 1 04:58:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:45 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:45.536 2 INFO neutron.agent.securitygroups_rpc [None req-16f44daa-a956-4040-8610-4ccc01164dad 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['57bcc40e-59d1-4274-96b5-b777fde58e85', '436843c7-c8bd-4657-a6cb-df8e0dddd33a', '2fbf2c39-17e9-4e72-bbbb-e5125197536a']#033[00m Feb 1 04:58:45 localhost podman[314008]: Feb 1 04:58:45 localhost podman[314008]: 2026-02-01 09:58:45.566594158 +0000 UTC m=+0.088231840 container create 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:58:45 localhost systemd[1]: Started libpod-conmon-6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22.scope. Feb 1 04:58:45 localhost systemd[1]: Started libcrun container. 
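The PortBindingUpdatedEvent dump above is long, but the actual change is small: inside external_ids, neutron:cidrs grows from two /28 addresses to three and neutron:revision_number moves from '5' to '6', which is why the metadata agent then logs that metadata port ff54b909-b3b9-4669-8851-459606a86b19 was updated. A throwaway helper of this kind (illustrative only, not agent code) makes such old/new dumps easier to compare; the sample values are taken from the entry above:

    def diff_external_ids(old, new):
        """Yield (key, old_value, new_value) for keys that differ between two
        Port_Binding external_ids mappings as printed in the ovsdbapp DEBUG lines."""
        for key in sorted(set(old) | set(new)):
            if old.get(key) != new.get(key):
                yield key, old.get(key), new.get(key)

    old = {"neutron:cidrs": "10.100.0.18/28 10.100.0.2/28", "neutron:revision_number": "5"}
    new = {"neutron:cidrs": "10.100.0.18/28 10.100.0.2/28 10.100.0.34/28", "neutron:revision_number": "6"}
    for key, before, after in diff_external_ids(old, new):
        print(f"{key}: {before} -> {after}")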
Feb 1 04:58:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/645b45bd538c7154c5e174dfa275410a59a91bbc0b2a58b48dfb47eec9e95ab1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:45 localhost podman[314008]: 2026-02-01 09:58:45.524324521 +0000 UTC m=+0.045962243 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:45 localhost podman[314008]: 2026-02-01 09:58:45.629013294 +0000 UTC m=+0.150650966 container init 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 04:58:45 localhost podman[314008]: 2026-02-01 09:58:45.648264154 +0000 UTC m=+0.169901826 container start 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 1 04:58:45 localhost dnsmasq[314026]: started, version 2.85 cachesize 150 Feb 1 04:58:45 localhost dnsmasq[314026]: DNS service limited to local subnets Feb 1 04:58:45 localhost dnsmasq[314026]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:45 localhost dnsmasq[314026]: warning: no upstream servers configured Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: DHCP, static leases only on 10.100.0.32, lease time 1d Feb 1 04:58:45 localhost dnsmasq[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:45 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:45.725 259225 INFO neutron.agent.dhcp.agent [None req-a582b95a-0f98-4318-b6de-3aa83d0df205 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d96f800e-edb5-41eb-98bd-c4af07c61ec1, ip_allocation=immediate, mac_address=fa:16:3e:05:18:80, name=tempest-PortsTestJSON-1307719028, network_id=42a0a17b-be28-4b0f-b80f-055ba2c3d245, 
port_security_enabled=True, project_id=fc33978cc1c94009a152ec3cacbfe0e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['436843c7-c8bd-4657-a6cb-df8e0dddd33a', '57bcc40e-59d1-4274-96b5-b777fde58e85'], standard_attr_id=2588, status=DOWN, tags=[], tenant_id=fc33978cc1c94009a152ec3cacbfe0e5, updated_at=2026-02-01T09:58:45Z on network 42a0a17b-be28-4b0f-b80f-055ba2c3d245#033[00m Feb 1 04:58:45 localhost dnsmasq-dhcp[314026]: DHCPRELEASE(tapbfd160dd-fa) 10.100.0.14 fa:16:3e:05:18:80 Feb 1 04:58:45 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:45.930 259225 INFO neutron.agent.dhcp.agent [None req-2d1e2e7a-d2ad-402c-aeae-fa7d4398fffd - - - - - -] DHCP configuration for ports {'d96f800e-edb5-41eb-98bd-c4af07c61ec1', 'ff54b909-b3b9-4669-8851-459606a86b19', 'bfd160dd-fa98-4ea9-815c-a97263cc82ea', 'f5db53be-fb30-4c27-aabf-1c052ca12256'} is completed#033[00m Feb 1 04:58:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e197 e197: 6 total, 6 up, 6 in Feb 1 04:58:46 localhost nova_compute[274317]: 2026-02-01 09:58:46.264 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 225 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 10 KiB/s wr, 86 op/s Feb 1 04:58:46 localhost dnsmasq[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 1 addresses Feb 1 04:58:46 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:46 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:46 localhost podman[314046]: 2026-02-01 09:58:46.377281365 +0000 UTC m=+0.058712831 container kill 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:46 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:46.565 259225 INFO neutron.agent.dhcp.agent [None req-b9eb3ad3-2aff-427a-87fc-b34af5af4f64 - - - - - -] DHCP configuration for ports {'d96f800e-edb5-41eb-98bd-c4af07c61ec1'} is completed#033[00m Feb 1 04:58:46 localhost systemd[1]: tmp-crun.zFj9xY.mount: Deactivated successfully. 
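Here the replacement dnsmasq (pid 314026) comes up with static-lease-only DHCP on the 10.100.0.0, 10.100.0.16 and 10.100.0.32 subnets and re-reads the addn_hosts/host/opts files under /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245 each time the agent rewrites them; the count in "addn_hosts - N addresses" is the quickest signal of what a reload actually changed. A minimal sketch, again just for reading these logs rather than agent code, that tracks that count per network:

    import re

    ADDN_HOSTS = re.compile(
        r"dnsmasq\[\d+\]: read /var/lib/neutron/dhcp/(?P<network>[0-9a-f-]+)/addn_hosts"
        r" - (?P<count>\d+) addresses"
    )

    def addn_host_counts(lines):
        """Map network UUID -> last seen address count from dnsmasq re-read lines."""
        counts = {}
        for line in lines:
            m = ADDN_HOSTS.search(line)
            if m:
                counts[m.group("network")] = int(m.group("count"))
        return counts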
Feb 1 04:58:46 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:46.622 2 INFO neutron.agent.securitygroups_rpc [None req-5515d09c-f3a8-4bc8-bfd1-9ff7fafccb05 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['57bcc40e-59d1-4274-96b5-b777fde58e85', '436843c7-c8bd-4657-a6cb-df8e0dddd33a']#033[00m Feb 1 04:58:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e198 e198: 6 total, 6 up, 6 in Feb 1 04:58:46 localhost dnsmasq[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:46 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:46 localhost podman[314083]: 2026-02-01 09:58:46.949383284 +0000 UTC m=+0.051582528 container kill 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:58:46 localhost dnsmasq-dhcp[314026]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:47 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "format": "json"}]: dispatch Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:47.175+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48a610c1-90ab-40fc-9c1e-287cd3c7e703' of type subvolume Feb 1 04:58:47 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48a610c1-90ab-40fc-9c1e-287cd3c7e703' of type subvolume Feb 1 04:58:47 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48a610c1-90ab-40fc-9c1e-287cd3c7e703", "force": true, "format": "json"}]: dispatch Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48a610c1-90ab-40fc-9c1e-287cd3c7e703'' moved to trashcan Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:58:47 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48a610c1-90ab-40fc-9c1e-287cd3c7e703, vol_name:cephfs) < "" Feb 1 04:58:47 localhost nova_compute[274317]: 2026-02-01 09:58:47.517 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e199 e199: 6 total, 6 up, 6 in Feb 1 04:58:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 225 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 20 KiB/s wr, 71 op/s Feb 1 04:58:48 localhost dnsmasq[314026]: exiting on receipt of SIGTERM Feb 1 04:58:48 localhost podman[314120]: 2026-02-01 09:58:48.425604931 +0000 UTC m=+0.058985509 container kill 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:58:48 localhost systemd[1]: libpod-6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22.scope: Deactivated successfully. Feb 1 04:58:48 localhost podman[314132]: 2026-02-01 09:58:48.499944408 +0000 UTC m=+0.058280977 container died 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 1 04:58:48 localhost podman[314132]: 2026-02-01 09:58:48.530328225 +0000 UTC m=+0.088664734 container cleanup 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:58:48 localhost systemd[1]: libpod-conmon-6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22.scope: Deactivated successfully. 
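The ceph-mgr audit lines above show the caller first asking for "fs clone status" on subvolume 48a610c1-90ab-40fc-9c1e-287cd3c7e703; the (95) Operation not supported reply only means the subvolume was not created as a clone, and the caller then proceeds with "fs subvolume rm ... force: true", after which the path is moved to the trashcan for asynchronous purging. The dispatched JSON maps directly onto the ceph CLI; a hedged sketch of the equivalent calls (assumes a reachable cluster and a usable client keyring, error handling omitted):

    import subprocess

    def clone_status(vol_name, clone_name):
        """CLI equivalent of the 'fs clone status' mgr command; fails on plain subvolumes, as above."""
        return subprocess.run(
            ["ceph", "fs", "clone", "status", vol_name, clone_name, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout

    def subvolume_rm(vol_name, sub_name):
        """CLI equivalent of the 'fs subvolume rm' command seen in the audit line."""
        subprocess.run(
            ["ceph", "fs", "subvolume", "rm", vol_name, sub_name, "--force", "--format", "json"],
            check=True,
        )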
Feb 1 04:58:48 localhost podman[314135]: 2026-02-01 09:58:48.570740455 +0000 UTC m=+0.122958943 container remove 6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:58:48 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:48.656 2 INFO neutron.agent.securitygroups_rpc [None req-ecc18215-6c26-46d4-bb1e-4f9b7e08ab87 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:48 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:48.800 259225 INFO neutron.agent.linux.ip_lib [None req-76446668-2ac3-4c38-a599-bbabe26e3095 - - - - - -] Device tap5b4ba3e8-5f cannot be used as it has no MAC address#033[00m Feb 1 04:58:48 localhost nova_compute[274317]: 2026-02-01 09:58:48.869 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:48 localhost kernel: device tap5b4ba3e8-5f entered promiscuous mode Feb 1 04:58:48 localhost NetworkManager[5972]: [1769939928.8779] manager: (tap5b4ba3e8-5f): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Feb 1 04:58:48 localhost ovn_controller[152787]: 2026-02-01T09:58:48Z|00219|binding|INFO|Claiming lport 5b4ba3e8-5f00-4316-a86d-1c06057933fc for this chassis. Feb 1 04:58:48 localhost nova_compute[274317]: 2026-02-01 09:58:48.878 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:48 localhost ovn_controller[152787]: 2026-02-01T09:58:48Z|00220|binding|INFO|5b4ba3e8-5f00-4316-a86d-1c06057933fc: Claiming unknown Feb 1 04:58:48 localhost systemd-udevd[314189]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:58:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:48.898 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-efcb439d-5008-49b3-8d74-fc95eb1e0a3c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efcb439d-5008-49b3-8d74-fc95eb1e0a3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e1ea1a33e554968ba8ebaf6753c9c5d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e912d1-6794-400a-b37c-704b8b53759d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5b4ba3e8-5f00-4316-a86d-1c06057933fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:48.905 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4ba3e8-5f00-4316-a86d-1c06057933fc in datapath efcb439d-5008-49b3-8d74-fc95eb1e0a3c bound to our chassis#033[00m Feb 1 04:58:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:48.909 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port ce3bc52f-aef5-49c7-bb45-7d0cc76cd721 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 04:58:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:48.910 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efcb439d-5008-49b3-8d74-fc95eb1e0a3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:48.911 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[16d0a980-ac30-4af0-b1db-bca7758d809e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:48 localhost ovn_controller[152787]: 2026-02-01T09:58:48Z|00221|binding|INFO|Setting lport 5b4ba3e8-5f00-4316-a86d-1c06057933fc ovn-installed in OVS Feb 1 04:58:48 localhost ovn_controller[152787]: 2026-02-01T09:58:48Z|00222|binding|INFO|Setting lport 5b4ba3e8-5f00-4316-a86d-1c06057933fc up in Southbound Feb 1 04:58:48 localhost nova_compute[274317]: 2026-02-01 09:58:48.916 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 
localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost journal[224955]: ethtool ioctl error on tap5b4ba3e8-5f: No such device Feb 1 04:58:48 localhost nova_compute[274317]: 2026-02-01 09:58:48.956 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:48 localhost nova_compute[274317]: 2026-02-01 09:58:48.985 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e200 e200: 6 total, 6 up, 6 in Feb 1 04:58:49 localhost systemd[1]: var-lib-containers-storage-overlay-645b45bd538c7154c5e174dfa275410a59a91bbc0b2a58b48dfb47eec9e95ab1-merged.mount: Deactivated successfully. Feb 1 04:58:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d36c0b9e0b2c41599e6947f933975a9f4b52579fce13dde334b1efc9e87ba22-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:49 localhost podman[314267]: Feb 1 04:58:49 localhost podman[314267]: 2026-02-01 09:58:49.531516678 +0000 UTC m=+0.085382322 container create 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 04:58:49 localhost systemd[1]: Started libpod-conmon-342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893.scope. Feb 1 04:58:49 localhost podman[314267]: 2026-02-01 09:58:49.490576092 +0000 UTC m=+0.044441726 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:49 localhost systemd[1]: Started libcrun container. 
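In the span above the metadata agent reports DHCP port 5b4ba3e8-5f00-4316-a86d-1c06057933fc bound to this chassis but still finds "No valid VIF ports" for network efcb439d-5008-49b3-8d74-fc95eb1e0a3c, because the only other Port_Binding carries 'unknown' in its MAC column and therefore yields no IP addresses (the agent.py:536 DEBUG line). A deliberately simplified sketch of that check, not the actual neutron code:

    def ips_from_mac_column(mac_entries):
        """Each Port_Binding MAC entry is 'MAC [IP ...]'; an entry of 'unknown'
        carries no addresses, so such a port contributes nothing."""
        ips = []
        for entry in mac_entries:
            parts = entry.split()
            if len(parts) > 1 and parts[0] != "unknown":
                ips.extend(parts[1:])
        return ips

    assert ips_from_mac_column(["unknown"]) == []
    assert ips_from_mac_column(["fa:16:3e:60:f7:71 10.100.0.18 10.100.0.2"]) == ["10.100.0.18", "10.100.0.2"]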
Feb 1 04:58:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5557d9afffbbda1966ecc891e6b582c678fb403d0ebb1224f5064084d81706b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:49 localhost podman[314267]: 2026-02-01 09:58:49.610472329 +0000 UTC m=+0.164338003 container init 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 1 04:58:49 localhost podman[314267]: 2026-02-01 09:58:49.620395177 +0000 UTC m=+0.174260821 container start 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:58:49 localhost dnsmasq[314287]: started, version 2.85 cachesize 150 Feb 1 04:58:49 localhost dnsmasq[314287]: DNS service limited to local subnets Feb 1 04:58:49 localhost dnsmasq[314287]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:49 localhost dnsmasq[314287]: warning: no upstream servers configured Feb 1 04:58:49 localhost dnsmasq-dhcp[314287]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 1 04:58:49 localhost dnsmasq-dhcp[314287]: DHCP, static leases only on 10.100.0.32, lease time 1d Feb 1 04:58:49 localhost dnsmasq[314287]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/addn_hosts - 0 addresses Feb 1 04:58:49 localhost dnsmasq-dhcp[314287]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/host Feb 1 04:58:49 localhost dnsmasq-dhcp[314287]: read /var/lib/neutron/dhcp/42a0a17b-be28-4b0f-b80f-055ba2c3d245/opts Feb 1 04:58:49 localhost podman[314310]: Feb 1 04:58:49 localhost podman[314310]: 2026-02-01 09:58:49.857101955 +0000 UTC m=+0.092302848 container create b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:58:49 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:49.863 259225 INFO neutron.agent.dhcp.agent [None req-3c3fc470-2c5a-469e-a56b-8b233ecc51dd - - - - - -] DHCP configuration for ports {'f5db53be-fb30-4c27-aabf-1c052ca12256', 'ff54b909-b3b9-4669-8851-459606a86b19', 
'bfd160dd-fa98-4ea9-815c-a97263cc82ea'} is completed#033[00m Feb 1 04:58:49 localhost systemd[1]: Started libpod-conmon-b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8.scope. Feb 1 04:58:49 localhost systemd[1]: Started libcrun container. Feb 1 04:58:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d531bacba67f6f62eddd0d069e5b217cc14d9a3f2a39da03970900ebf0f5bfb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:49 localhost podman[314310]: 2026-02-01 09:58:49.813252008 +0000 UTC m=+0.048452931 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:49 localhost podman[314310]: 2026-02-01 09:58:49.920496221 +0000 UTC m=+0.155697114 container init b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:58:49 localhost podman[314310]: 2026-02-01 09:58:49.929277444 +0000 UTC m=+0.164478337 container start b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:58:49 localhost dnsmasq[314345]: started, version 2.85 cachesize 150 Feb 1 04:58:49 localhost dnsmasq[314345]: DNS service limited to local subnets Feb 1 04:58:49 localhost dnsmasq[314345]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:49 localhost dnsmasq[314345]: warning: no upstream servers configured Feb 1 04:58:49 localhost dnsmasq-dhcp[314345]: DHCP, static leases only on 10.102.0.0, lease time 1d Feb 1 04:58:49 localhost dnsmasq[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/addn_hosts - 0 addresses Feb 1 04:58:49 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/host Feb 1 04:58:49 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/opts Feb 1 04:58:49 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:49.988 259225 INFO neutron.agent.dhcp.agent [None req-978517d7-3ec8-4b39-a6ce-32b689987fac - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:48Z, description=, device_id=0d3b238e-083b-4f7c-8e29-650b41019987, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd504d75-8802-4bc4-9b4d-ae05b1c87cde, ip_allocation=immediate, 
mac_address=fa:16:3e:44:ce:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:58:45Z, description=, dns_domain=, id=efcb439d-5008-49b3-8d74-fc95eb1e0a3c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-544404190, port_security_enabled=True, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18623, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2617, status=ACTIVE, subnets=['ffc0767c-8f10-49f5-8670-ad4918ca881f'], tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:58:46Z, vlan_transparent=None, network_id=efcb439d-5008-49b3-8d74-fc95eb1e0a3c, port_security_enabled=False, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2637, status=DOWN, tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:58:48Z on network efcb439d-5008-49b3-8d74-fc95eb1e0a3c#033[00m Feb 1 04:58:49 localhost dnsmasq[314287]: exiting on receipt of SIGTERM Feb 1 04:58:49 localhost podman[314344]: 2026-02-01 09:58:49.992010889 +0000 UTC m=+0.055237623 container kill 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:58:49 localhost systemd[1]: libpod-342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893.scope: Deactivated successfully. 
Feb 1 04:58:50 localhost podman[314357]: 2026-02-01 09:58:50.04595553 +0000 UTC m=+0.040326267 container died 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 04:58:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:50.104 259225 INFO neutron.agent.dhcp.agent [None req-798dcf21-10f7-4903-9e17-f73e973d673c - - - - - -] DHCP configuration for ports {'b3a9245d-7b09-4ca7-945e-fb5d33a48fc0'} is completed#033[00m Feb 1 04:58:50 localhost podman[314357]: 2026-02-01 09:58:50.134258583 +0000 UTC m=+0.128629310 container cleanup 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 04:58:50 localhost systemd[1]: libpod-conmon-342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893.scope: Deactivated successfully. Feb 1 04:58:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:50 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:50.152 2 INFO neutron.agent.securitygroups_rpc [None req-421b8c9c-f64a-4a6d-8e56-2aad4effb91c 80bbd13fa0544ff98e6c38448e01c054 fc33978cc1c94009a152ec3cacbfe0e5 - - default default] Security group member updated ['277d73b7-d267-437d-b5df-bd560d180a7a']#033[00m Feb 1 04:58:50 localhost podman[314359]: 2026-02-01 09:58:50.161798101 +0000 UTC m=+0.148744306 container remove 342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42a0a17b-be28-4b0f-b80f-055ba2c3d245, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 1 04:58:50 localhost nova_compute[274317]: 2026-02-01 09:58:50.171 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:50 localhost ovn_controller[152787]: 2026-02-01T09:58:50Z|00223|binding|INFO|Releasing lport bfd160dd-fa98-4ea9-815c-a97263cc82ea from this chassis (sb_readonly=0) Feb 1 04:58:50 localhost ovn_controller[152787]: 2026-02-01T09:58:50Z|00224|binding|INFO|Setting lport bfd160dd-fa98-4ea9-815c-a97263cc82ea down in Southbound Feb 1 04:58:50 localhost kernel: device tapbfd160dd-fa left promiscuous mode Feb 1 04:58:50 
localhost nova_compute[274317]: 2026-02-01 09:58:50.189 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:50.192 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42a0a17b-be28-4b0f-b80f-055ba2c3d245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc33978cc1c94009a152ec3cacbfe0e5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae16595-a7d2-468e-9eb0-cb266a7101cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=bfd160dd-fa98-4ea9-815c-a97263cc82ea) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:50.194 158655 INFO neutron.agent.ovn.metadata.agent [-] Port bfd160dd-fa98-4ea9-815c-a97263cc82ea in datapath 42a0a17b-be28-4b0f-b80f-055ba2c3d245 unbound from our chassis#033[00m Feb 1 04:58:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:50.197 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42a0a17b-be28-4b0f-b80f-055ba2c3d245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:58:50 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:50.198 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ec733fe7-a9da-4f2d-a719-2c1dc2d9c25f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:50 localhost dnsmasq[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/addn_hosts - 1 addresses Feb 1 04:58:50 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/host Feb 1 04:58:50 localhost podman[314403]: 2026-02-01 09:58:50.200073563 +0000 UTC m=+0.065943045 container kill b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:58:50 localhost dnsmasq-dhcp[314345]: read 
/var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/opts Feb 1 04:58:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 225 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 19 KiB/s wr, 67 op/s Feb 1 04:58:50 localhost systemd[1]: tmp-crun.idLqLe.mount: Deactivated successfully. Feb 1 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay-5557d9afffbbda1966ecc891e6b582c678fb403d0ebb1224f5064084d81706b3-merged.mount: Deactivated successfully. Feb 1 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-342afcda5f7aeabf1c3f9435a3d9a5e21d4af63074651e4b683430042b8a0893-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:50.494 259225 INFO neutron.agent.dhcp.agent [None req-23abf485-3fef-423e-93c0-14affb3f57db - - - - - -] DHCP configuration for ports {'cd504d75-8802-4bc4-9b4d-ae05b1c87cde'} is completed#033[00m Feb 1 04:58:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e", "format": "json"}]: dispatch Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:58:50 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e' of type subvolume Feb 1 04:58:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:58:50.518+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e' of type subvolume Feb 1 04:58:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e", "force": true, "format": "json"}]: dispatch Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e'' moved to trashcan Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:58:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a00bb56c-4e4f-4bbc-b0ef-d7713d639d3e, vol_name:cephfs) < "" Feb 1 04:58:50 localhost systemd[1]: run-netns-qdhcp\x2d42a0a17b\x2dbe28\x2d4b0f\x2db80f\x2d055ba2c3d245.mount: Deactivated successfully. 
Feb 1 04:58:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:50.693 259225 INFO neutron.agent.dhcp.agent [None req-13d0cffd-6500-4d98-b5d3-732cfd6a9d8f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:50.694 259225 INFO neutron.agent.dhcp.agent [None req-13d0cffd-6500-4d98-b5d3-732cfd6a9d8f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:50.818 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:51.088 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:51 localhost nova_compute[274317]: 2026-02-01 09:58:51.117 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:51.132 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T09:58:48Z, description=, device_id=0d3b238e-083b-4f7c-8e29-650b41019987, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd504d75-8802-4bc4-9b4d-ae05b1c87cde, ip_allocation=immediate, mac_address=fa:16:3e:44:ce:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T09:58:45Z, description=, dns_domain=, id=efcb439d-5008-49b3-8d74-fc95eb1e0a3c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-544404190, port_security_enabled=True, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18623, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2617, status=ACTIVE, subnets=['ffc0767c-8f10-49f5-8670-ad4918ca881f'], tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:58:46Z, vlan_transparent=None, network_id=efcb439d-5008-49b3-8d74-fc95eb1e0a3c, port_security_enabled=False, project_id=3e1ea1a33e554968ba8ebaf6753c9c5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2637, status=DOWN, tags=[], tenant_id=3e1ea1a33e554968ba8ebaf6753c9c5d, updated_at=2026-02-01T09:58:48Z on network efcb439d-5008-49b3-8d74-fc95eb1e0a3c#033[00m Feb 1 04:58:51 localhost nova_compute[274317]: 2026-02-01 09:58:51.265 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:51 localhost dnsmasq[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/addn_hosts - 1 addresses Feb 1 04:58:51 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/host Feb 1 04:58:51 localhost podman[314442]: 2026-02-01 09:58:51.371070308 +0000 UTC m=+0.074187983 container kill 
b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 04:58:51 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/opts Feb 1 04:58:51 localhost nova_compute[274317]: 2026-02-01 09:58:51.371 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:58:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:58:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:51.704 259225 INFO neutron.agent.dhcp.agent [None req-051c8c14-9127-4216-a252-4b018ec392ac - - - - - -] DHCP configuration for ports {'cd504d75-8802-4bc4-9b4d-ae05b1c87cde'} is completed#033[00m Feb 1 04:58:51 localhost nova_compute[274317]: 2026-02-01 09:58:51.733 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e201 e201: 6 total, 6 up, 6 in Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.114 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:58:52 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:52.182 259225 INFO neutron.agent.linux.ip_lib [None req-f5ca7c42-845e-41dc-9662-6fca0cc63260 - - - - - -] Device tap99a3af7a-2f cannot be used as it has no MAC address#033[00m Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.205 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:52 localhost kernel: device tap99a3af7a-2f entered promiscuous mode Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.214 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:52 localhost NetworkManager[5972]: [1769939932.2145] manager: (tap99a3af7a-2f): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Feb 1 04:58:52 localhost ovn_controller[152787]: 2026-02-01T09:58:52Z|00225|binding|INFO|Claiming lport 99a3af7a-2f31-45d3-a12b-e57ea71be76c for this chassis. Feb 1 04:58:52 localhost ovn_controller[152787]: 2026-02-01T09:58:52Z|00226|binding|INFO|99a3af7a-2f31-45d3-a12b-e57ea71be76c: Claiming unknown Feb 1 04:58:52 localhost systemd-udevd[314474]: Network interface NamePolicy= disabled on kernel command line. Feb 1 04:58:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:52.229 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fcb235a7-3377-4ca7-8f52-37430165d9d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcb235a7-3377-4ca7-8f52-37430165d9d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc4e7855-d30d-41d2-9b5d-873555255c0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=99a3af7a-2f31-45d3-a12b-e57ea71be76c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:52.231 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 99a3af7a-2f31-45d3-a12b-e57ea71be76c in datapath fcb235a7-3377-4ca7-8f52-37430165d9d4 bound to our chassis#033[00m Feb 1 04:58:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:52.233 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcb235a7-3377-4ca7-8f52-37430165d9d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:52 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:52.235 
2 INFO neutron.agent.securitygroups_rpc [None req-1cc83620-5d6e-42d2-912e-0b7270f21e87 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:58:52 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:52.234 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[3f28edb0-1c9e-4e3b-84e5-0fd338b966fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost ovn_controller[152787]: 2026-02-01T09:58:52Z|00227|binding|INFO|Setting lport 99a3af7a-2f31-45d3-a12b-e57ea71be76c ovn-installed in OVS Feb 1 04:58:52 localhost ovn_controller[152787]: 2026-02-01T09:58:52Z|00228|binding|INFO|Setting lport 99a3af7a-2f31-45d3-a12b-e57ea71be76c up in Southbound Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.247 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost journal[224955]: ethtool ioctl error on tap99a3af7a-2f: No such device Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.289 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 225 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 51 op/s Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.318 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:52 localhost nova_compute[274317]: 2026-02-01 09:58:52.522 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:53 localhost podman[314543]: Feb 1 04:58:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:53.116 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2968b9f2-d43d-41e6-908a-fdd86fd98b2c with type ""#033[00m Feb 1 04:58:53 localhost ovn_controller[152787]: 2026-02-01T09:58:53Z|00229|binding|INFO|Removing iface tap99a3af7a-2f ovn-installed in OVS Feb 1 04:58:53 localhost ovn_controller[152787]: 2026-02-01T09:58:53Z|00230|binding|INFO|Removing lport 99a3af7a-2f31-45d3-a12b-e57ea71be76c ovn-installed in OVS Feb 1 04:58:53 localhost 
ovn_metadata_agent[158650]: 2026-02-01 09:58:53.118 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fcb235a7-3377-4ca7-8f52-37430165d9d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcb235a7-3377-4ca7-8f52-37430165d9d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc4e7855-d30d-41d2-9b5d-873555255c0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=99a3af7a-2f31-45d3-a12b-e57ea71be76c) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.120 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:58:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:53.120 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 99a3af7a-2f31-45d3-a12b-e57ea71be76c in datapath fcb235a7-3377-4ca7-8f52-37430165d9d4 unbound from our chassis#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.120 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.121 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:58:53 localhost ovn_metadata_agent[158650]: 2026-02-01 
09:58:53.121 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcb235a7-3377-4ca7-8f52-37430165d9d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:58:53 localhost ovn_metadata_agent[158650]: 2026-02-01 09:58:53.122 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[9a607e6f-c1ed-415d-bce8-e305aa9c0f74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:58:53 localhost podman[314543]: 2026-02-01 09:58:53.12660156 +0000 UTC m=+0.090573133 container create 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.150 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:53 localhost systemd[1]: Started libpod-conmon-8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9.scope. Feb 1 04:58:53 localhost podman[314543]: 2026-02-01 09:58:53.081815384 +0000 UTC m=+0.045786997 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:58:53 localhost systemd[1]: tmp-crun.zbMqq4.mount: Deactivated successfully. Feb 1 04:58:53 localhost systemd[1]: Started libcrun container. 
Feb 1 04:58:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b79e32ec6f1286ebc23bc523a5ce9ad71e572897b6cdcd2a4d0a356ebe608b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:58:53 localhost podman[314543]: 2026-02-01 09:58:53.200831444 +0000 UTC m=+0.164803037 container init 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:58:53 localhost podman[314543]: 2026-02-01 09:58:53.209788603 +0000 UTC m=+0.173760186 container start 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:58:53 localhost dnsmasq[314563]: started, version 2.85 cachesize 150 Feb 1 04:58:53 localhost dnsmasq[314563]: DNS service limited to local subnets Feb 1 04:58:53 localhost dnsmasq[314563]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:58:53 localhost dnsmasq[314563]: warning: no upstream servers configured Feb 1 04:58:53 localhost dnsmasq-dhcp[314563]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:58:53 localhost dnsmasq[314563]: read /var/lib/neutron/dhcp/fcb235a7-3377-4ca7-8f52-37430165d9d4/addn_hosts - 0 addresses Feb 1 04:58:53 localhost dnsmasq-dhcp[314563]: read /var/lib/neutron/dhcp/fcb235a7-3377-4ca7-8f52-37430165d9d4/host Feb 1 04:58:53 localhost dnsmasq-dhcp[314563]: read /var/lib/neutron/dhcp/fcb235a7-3377-4ca7-8f52-37430165d9d4/opts Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.344 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:53.351 259225 INFO neutron.agent.dhcp.agent [None req-07c7ae8a-ff69-49b5-84a1-29eb4b24381b - - - - - -] DHCP configuration for ports {'cd20fa64-7f72-4e68-9041-518ea56bf3ef'} is completed#033[00m Feb 1 04:58:53 localhost dnsmasq[314563]: exiting on receipt of SIGTERM Feb 1 04:58:53 localhost podman[314598]: 2026-02-01 09:58:53.433892367 +0000 UTC m=+0.066083781 container kill 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:58:53 localhost systemd[1]: libpod-8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9.scope: Deactivated successfully. Feb 1 04:58:53 localhost podman[314612]: 2026-02-01 09:58:53.506804099 +0000 UTC m=+0.058287437 container died 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:58:53 localhost podman[314612]: 2026-02-01 09:58:53.545481705 +0000 UTC m=+0.096965023 container cleanup 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 04:58:53 localhost systemd[1]: libpod-conmon-8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9.scope: Deactivated successfully. Feb 1 04:58:53 localhost podman[314614]: 2026-02-01 09:58:53.580458305 +0000 UTC m=+0.126086231 container remove 8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fcb235a7-3377-4ca7-8f52-37430165d9d4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.595 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:53 localhost kernel: device tap99a3af7a-2f left promiscuous mode Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.609 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:53.626 259225 INFO neutron.agent.dhcp.agent [None req-945c7240-0ff5-49d0-9867-cd36ec058158 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:58:53.627 259225 INFO neutron.agent.dhcp.agent [None req-945c7240-0ff5-49d0-9867-cd36ec058158 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:58:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:58:53 localhost ceph-mon[298604]: 
log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2759869622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.678 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.876 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.878 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11556MB free_disk=41.70010757446289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.878 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.879 
274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.934 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.935 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:58:53 localhost nova_compute[274317]: 2026-02-01 09:58:53.964 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:58:54 localhost podman[314643]: 2026-02-01 09:58:54.113206748 +0000 UTC m=+0.074825303 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 04:58:54 localhost podman[314643]: 2026-02-01 09:58:54.124653625 +0000 UTC m=+0.086272180 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:58:54 localhost systemd[1]: var-lib-containers-storage-overlay-7b79e32ec6f1286ebc23bc523a5ce9ad71e572897b6cdcd2a4d0a356ebe608b9-merged.mount: Deactivated successfully. Feb 1 04:58:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8afd2e261a286fa4c5971f6f71f6c9a1bf8f069459ff7a3b28c27cfbff2ff1e9-userdata-shm.mount: Deactivated successfully. Feb 1 04:58:54 localhost systemd[1]: run-netns-qdhcp\x2dfcb235a7\x2d3377\x2d4ca7\x2d8f52\x2d37430165d9d4.mount: Deactivated successfully. Feb 1 04:58:54 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 04:58:54 localhost podman[314685]: 2026-02-01 09:58:54.235158819 +0000 UTC m=+0.086993222 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:58:54 localhost podman[314685]: 2026-02-01 09:58:54.273628577 +0000 UTC m=+0.125462930 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:58:54 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 04:58:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 225 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 10 KiB/s wr, 66 op/s Feb 1 04:58:54 localhost systemd[1]: tmp-crun.N8CpoP.mount: Deactivated successfully. Feb 1 04:58:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:58:54 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2177280335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:58:54 localhost podman[314710]: 2026-02-01 09:58:54.393674719 +0000 UTC m=+0.095666063 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, io.buildah.version=1.33.7, version=9.7, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container) Feb 1 04:58:54 localhost nova_compute[274317]: 2026-02-01 09:58:54.401 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:58:54 localhost nova_compute[274317]: 2026-02-01 09:58:54.408 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:58:54 localhost podman[314710]: 2026-02-01 09:58:54.417657056 +0000 UTC m=+0.119648380 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, release=1769056855, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.7) Feb 1 04:58:54 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 04:58:54 localhost nova_compute[274317]: 2026-02-01 09:58:54.430 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:58:54 localhost nova_compute[274317]: 2026-02-01 09:58:54.484 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:58:54 localhost nova_compute[274317]: 2026-02-01 09:58:54.485 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:58:54 localhost podman[314728]: 2026-02-01 09:58:54.491117226 +0000 UTC m=+0.088225740 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true) Feb 1 04:58:54 localhost podman[314728]: 2026-02-01 09:58:54.497566026 +0000 UTC m=+0.094674580 container exec_died 
412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:58:54 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:58:54 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:54.946 2 INFO neutron.agent.securitygroups_rpc [req-6a252779-d066-42d3-937d-d19d3be50ea7 req-0e4e2316-508a-44f2-8edc-582240eab0d7 afad352e9d664799bf5de0cadcf3c7cd ff200d66c230435098f5a0489bf1e8f7 - - default default] Security group member updated ['95400daf-a74d-4007-ac5f-e79aa8e5c1cd']#033[00m Feb 1 04:58:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:58:55 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 1 addresses Feb 1 04:58:55 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:58:55 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:58:55 localhost podman[314766]: 2026-02-01 09:58:55.195152367 +0000 UTC m=+0.060039082 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:58:55 localhost nova_compute[274317]: 2026-02-01 09:58:55.486 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:55 localhost nova_compute[274317]: 2026-02-01 09:58:55.486 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:56 localhost nova_compute[274317]: 2026-02-01 09:58:56.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:56 localhost nova_compute[274317]: 2026-02-01 09:58:56.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:56 localhost nova_compute[274317]: 2026-02-01 09:58:56.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:58:56 localhost nova_compute[274317]: 2026-02-01 09:58:56.268 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 225 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 8.0 KiB/s wr, 51 op/s Feb 1 04:58:56 localhost nova_compute[274317]: 2026-02-01 09:58:56.332 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e202 e202: 6 total, 6 up, 6 in Feb 1 04:58:56 localhost ovn_controller[152787]: 2026-02-01T09:58:56Z|00231|memory|INFO|peak resident set size grew 52% in last 2644.0 seconds, from 14972 kB to 22796 kB Feb 1 04:58:56 localhost ovn_controller[152787]: 2026-02-01T09:58:56Z|00232|memory|INFO|idl-cells-OVN_Southbound:8429 idl-cells-Open_vSwitch:1155 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:221 lflow-cache-entries-cache-matches:271 lflow-cache-size-KB:903 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:474 ofctrl_installed_flow_usage-KB:346 ofctrl_sb_flow_ref_usage-KB:180 Feb 1 04:58:57 localhost nova_compute[274317]: 2026-02-01 09:58:57.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:58:57 localhost nova_compute[274317]: 2026-02-01 09:58:57.525 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:58:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 12 KiB/s wr, 84 op/s Feb 1 04:58:58 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:58.561 2 INFO neutron.agent.securitygroups_rpc [None req-fa4d9652-c836-4282-9ccf-0ce3333cfc7d 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['9475ea4c-43e5-4601-aa09-56b92b5b1098']#033[00m Feb 1 04:58:58 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:58.895 2 INFO neutron.agent.securitygroups_rpc [None req-30e16878-3a93-4ab5-adf2-36d2809551ae 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['9475ea4c-43e5-4601-aa09-56b92b5b1098']#033[00m Feb 1 04:58:59 localhost neutron_sriov_agent[252054]: 2026-02-01 09:58:59.218 2 INFO neutron.agent.securitygroups_rpc [None req-da96df46-f62f-42b1-9cd1-d7e6a5376fca d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['5471bfa5-0ba1-439c-b208-7f1eef47ebe2']#033[00m Feb 1 04:58:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e203 e203: 6 total, 6 up, 6 in Feb 1 04:59:00 localhost podman[236852]: time="2026-02-01T09:59:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:59:00 localhost podman[236852]: @ - - [01/Feb/2026:09:59:00 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160831 "" "Go-http-client/1.1" Feb 1 04:59:00 localhost podman[236852]: @ - - [01/Feb/2026:09:59:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19719 "" "Go-http-client/1.1" Feb 1 04:59:00 localhost nova_compute[274317]: 2026-02-01 09:59:00.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 12 KiB/s wr, 84 op/s Feb 1 04:59:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e204 e204: 6 total, 6 up, 6 in Feb 1 04:59:01 localhost nova_compute[274317]: 2026-02-01 09:59:01.272 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:01 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:01.349 2 INFO neutron.agent.securitygroups_rpc [None req-c763a1d5-029b-4398-9f3b-b445d5b844aa d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['5471bfa5-0ba1-439c-b208-7f1eef47ebe2', '4d2012b8-f333-4b7a-9cf4-a971a1fa768f']#033[00m Feb 1 04:59:01 localhost nova_compute[274317]: 2026-02-01 09:59:01.421 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:01 localhost openstack_network_exporter[239388]: ERROR 09:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:59:01 localhost openstack_network_exporter[239388]: Feb 1 04:59:01 localhost openstack_network_exporter[239388]: ERROR 09:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:59:01 localhost openstack_network_exporter[239388]: Feb 1 04:59:01 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:01.782 2 INFO neutron.agent.securitygroups_rpc [None req-1a8757de-ce31-4291-bd6c-41d255499299 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e205 e205: 6 total, 6 up, 6 in Feb 1 04:59:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta' Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "format": "json"}]: dispatch Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.4 KiB/s wr, 46 op/s Feb 1 04:59:02 localhost nova_compute[274317]: 2026-02-01 09:59:02.530 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:02 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:02.555 2 INFO neutron.agent.securitygroups_rpc [None req-c9b67f75-2c82-4fb2-8eff-26d6d15e8cf1 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['4d2012b8-f333-4b7a-9cf4-a971a1fa768f']#033[00m Feb 1 04:59:02 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:02.631 2 INFO neutron.agent.securitygroups_rpc [None req-77c4ed81-7ddc-4083-9aca-2d68314f54e3 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:02 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:02.861 2 INFO neutron.agent.securitygroups_rpc [None req-648bce3d-1b69-4a9f-8926-09639fc82cde 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 09:59:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 04:59:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:04.068 2 INFO neutron.agent.securitygroups_rpc [None req-497c1bff-4722-4a35-9304-716d637751a5 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.289 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 7.7 KiB/s wr, 55 op/s Feb 1 04:59:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:04.488 2 INFO neutron.agent.securitygroups_rpc [None req-91776191-3760-464d-b953-295169a6f779 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated 
['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:04 localhost ovn_controller[152787]: 2026-02-01T09:59:04Z|00233|ovn_bfd|INFO|Disabled BFD on interface ovn-2186fb-0 Feb 1 04:59:04 localhost ovn_controller[152787]: 2026-02-01T09:59:04Z|00234|ovn_bfd|INFO|Disabled BFD on interface ovn-e1cc33-0 Feb 1 04:59:04 localhost ovn_controller[152787]: 2026-02-01T09:59:04Z|00235|ovn_bfd|INFO|Disabled BFD on interface ovn-45aa31-0 Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.612 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.615 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.628 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.735 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost dnsmasq[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/addn_hosts - 0 addresses Feb 1 04:59:04 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/host Feb 1 04:59:04 localhost dnsmasq-dhcp[310765]: read /var/lib/neutron/dhcp/c3e71f40-156c-4217-bedf-836f04a8f728/opts Feb 1 04:59:04 localhost systemd[1]: tmp-crun.tlDg4x.mount: Deactivated successfully. Feb 1 04:59:04 localhost podman[314803]: 2026-02-01 09:59:04.761673403 +0000 UTC m=+0.083600446 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 04:59:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:04.765 2 INFO neutron.agent.securitygroups_rpc [None req-95923de5-5aba-4d8b-a050-6896df644f34 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 04:59:04 localhost podman[314816]: 2026-02-01 09:59:04.88157865 +0000 UTC m=+0.094069662 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 1 04:59:04 localhost podman[314816]: 2026-02-01 09:59:04.891426128 +0000 UTC m=+0.103917140 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:59:04 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.938 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost kernel: device tap0ff05a29-3c left promiscuous mode Feb 1 04:59:04 localhost ovn_controller[152787]: 2026-02-01T09:59:04Z|00236|binding|INFO|Releasing lport 0ff05a29-3cc7-4c1a-a005-225d700300ca from this chassis (sb_readonly=0) Feb 1 04:59:04 localhost ovn_controller[152787]: 2026-02-01T09:59:04Z|00237|binding|INFO|Setting lport 0ff05a29-3cc7-4c1a-a005-225d700300ca down in Southbound Feb 1 04:59:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:04.950 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-c3e71f40-156c-4217-bedf-836f04a8f728', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3e71f40-156c-4217-bedf-836f04a8f728', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff200d66c230435098f5a0489bf1e8f7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4bd8115-ffb2-4415-a799-f41a6c9021b2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ff05a29-3cc7-4c1a-a005-225d700300ca) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:04.951 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 0ff05a29-3cc7-4c1a-a005-225d700300ca in datapath c3e71f40-156c-4217-bedf-836f04a8f728 unbound from our chassis#033[00m Feb 1 04:59:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:04.955 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3e71f40-156c-4217-bedf-836f04a8f728, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:59:04 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:04.956 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[3b846c1f-4163-4205-951d-5a11f2fb6a28]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:59:04 localhost nova_compute[274317]: 2026-02-01 09:59:04.963 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:04 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:04.990 2 INFO neutron.agent.securitygroups_rpc [None req-99fb79a2-24fb-465b-80f5-1d805114aafe 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1", "format": "json"}]: dispatch Feb 1 04:59:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:05 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:05.940 2 INFO neutron.agent.securitygroups_rpc [None req-47fc05ae-3df2-436b-85c1-a738a10459e4 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:06 localhost nova_compute[274317]: 2026-02-01 09:59:06.309 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 7.2 KiB/s wr, 52 op/s Feb 1 04:59:06 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:06.686 2 INFO neutron.agent.securitygroups_rpc [None req-a0d41d31-bc4d-4dff-9a1f-c5fa8673a648 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:06 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e206 e206: 6 total, 6 up, 6 in Feb 1 04:59:06 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:06.996 2 INFO neutron.agent.securitygroups_rpc [None req-1e2106aa-eb56-4244-87ab-21e381223ca0 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['1156e221-0c08-4d6b-9b97-5a6e91528188']#033[00m Feb 1 04:59:07 localhost nova_compute[274317]: 2026-02-01 09:59:07.530 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 04:59:07 localhost podman[314845]: 2026-02-01 09:59:07.870065577 +0000 UTC m=+0.083032950 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:59:07 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:07.894 2 INFO neutron.agent.securitygroups_rpc [None req-29be5c8b-b62f-485c-998f-043fe218176b d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['4c513797-4919-4e80-9d08-c2b88dcc61a1']#033[00m Feb 1 04:59:07 localhost podman[314845]: 2026-02-01 09:59:07.903277782 +0000 UTC m=+0.116245205 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 04:59:07 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:59:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 8.3 KiB/s wr, 46 op/s Feb 1 04:59:08 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:08.395 2 INFO neutron.agent.securitygroups_rpc [None req-70d5b65b-a947-49dc-a1b2-2f95b637bb85 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['a0337e8b-7eb2-4444-b9bf-a19f28129233']#033[00m Feb 1 04:59:08 localhost dnsmasq[310765]: exiting on receipt of SIGTERM Feb 1 04:59:08 localhost podman[314885]: 2026-02-01 09:59:08.519788876 +0000 UTC m=+0.059235747 container kill 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 04:59:08 localhost systemd[1]: libpod-8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51.scope: Deactivated successfully. Feb 1 04:59:08 localhost podman[314897]: 2026-02-01 09:59:08.587762574 +0000 UTC m=+0.053229840 container died 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 1 04:59:08 localhost podman[314897]: 2026-02-01 09:59:08.619368428 +0000 UTC m=+0.084835654 container cleanup 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:59:08 localhost systemd[1]: libpod-conmon-8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51.scope: Deactivated successfully. 
Feb 1 04:59:08 localhost podman[314899]: 2026-02-01 09:59:08.671333818 +0000 UTC m=+0.131218250 container remove 8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e71f40-156c-4217-bedf-836f04a8f728, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 04:59:08 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:08.702 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:08 localhost systemd[1]: var-lib-containers-storage-overlay-6c22aff628e6bc84ba432acd1a0ec47a0f890d608bcc6d6b65fb5e1bf052ca32-merged.mount: Deactivated successfully. Feb 1 04:59:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e4559d8b52638c151427a0bb67f56a8e455aa18a53f27cb954ab69f4989ce51-userdata-shm.mount: Deactivated successfully. Feb 1 04:59:08 localhost systemd[1]: run-netns-qdhcp\x2dc3e71f40\x2d156c\x2d4217\x2dbedf\x2d836f04a8f728.mount: Deactivated successfully. Feb 1 04:59:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "768c0cef-f962-4eff-a24c-02f86df937ba", "format": "json"}]: dispatch Feb 1 04:59:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:09 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:09.388 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:09 localhost nova_compute[274317]: 2026-02-01 09:59:09.955 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 7.3 KiB/s wr, 40 op/s Feb 1 04:59:10 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:10.774 2 INFO neutron.agent.securitygroups_rpc [None req-db14a021-48cb-407b-8edd-e94cee0b2d02 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['882cde13-f256-402b-ab30-a0fc50e38425', 'd59fe500-82e7-40fc-885e-589a886bd9ec', '4c513797-4919-4e80-9d08-c2b88dcc61a1']#033[00m Feb 1 04:59:11 localhost nova_compute[274317]: 2026-02-01 09:59:11.356 274321 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:11 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:11.389 2 INFO neutron.agent.securitygroups_rpc [None req-3849eb70-2e22-4d07-ad6e-472e2f67b4ca 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['3d036e8e-d2c8-4e3a-9dbf-e906123b5f25']#033[00m Feb 1 04:59:11 localhost dnsmasq[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/addn_hosts - 0 addresses Feb 1 04:59:11 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/host Feb 1 04:59:11 localhost dnsmasq-dhcp[314345]: read /var/lib/neutron/dhcp/efcb439d-5008-49b3-8d74-fc95eb1e0a3c/opts Feb 1 04:59:11 localhost podman[314940]: 2026-02-01 09:59:11.605337188 +0000 UTC m=+0.059446933 container kill b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 04:59:12 localhost nova_compute[274317]: 2026-02-01 09:59:12.068 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:12 localhost ovn_controller[152787]: 2026-02-01T09:59:12Z|00238|binding|INFO|Releasing lport 5b4ba3e8-5f00-4316-a86d-1c06057933fc from this chassis (sb_readonly=0) Feb 1 04:59:12 localhost kernel: device tap5b4ba3e8-5f left promiscuous mode Feb 1 04:59:12 localhost ovn_controller[152787]: 2026-02-01T09:59:12Z|00239|binding|INFO|Setting lport 5b4ba3e8-5f00-4316-a86d-1c06057933fc down in Southbound Feb 1 04:59:12 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:12.080 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-efcb439d-5008-49b3-8d74-fc95eb1e0a3c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efcb439d-5008-49b3-8d74-fc95eb1e0a3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e1ea1a33e554968ba8ebaf6753c9c5d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e912d1-6794-400a-b37c-704b8b53759d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5b4ba3e8-5f00-4316-a86d-1c06057933fc) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:12 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:12.082 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4ba3e8-5f00-4316-a86d-1c06057933fc in datapath efcb439d-5008-49b3-8d74-fc95eb1e0a3c unbound from our chassis#033[00m Feb 1 04:59:12 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:12.085 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efcb439d-5008-49b3-8d74-fc95eb1e0a3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:59:12 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:12.086 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[29a3706a-06e7-47ca-9577-d8e360013d1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:59:12 localhost nova_compute[274317]: 2026-02-01 09:59:12.090 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:12.168 2 INFO neutron.agent.securitygroups_rpc [None req-5ba6c6ad-3063-4fac-a109-b3e261c4ab28 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['3d036e8e-d2c8-4e3a-9dbf-e906123b5f25']#033[00m Feb 1 04:59:12 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:12.195 2 INFO neutron.agent.securitygroups_rpc [None req-b3757230-46f4-432e-b101-9a3e15d9fc63 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['882cde13-f256-402b-ab30-a0fc50e38425', 'd59fe500-82e7-40fc-885e-589a886bd9ec']#033[00m Feb 1 04:59:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 146 MiB data, 875 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 6.1 KiB/s wr, 33 op/s Feb 1 04:59:12 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "768c0cef-f962-4eff-a24c-02f86df937ba_da6bfb97-c826-4894-a29f-6e5051fe3817", "force": true, "format": "json"}]: dispatch Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba_da6bfb97-c826-4894-a29f-6e5051fe3817, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta' Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba_da6bfb97-c826-4894-a29f-6e5051fe3817, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:12 localhost 
ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "768c0cef-f962-4eff-a24c-02f86df937ba", "force": true, "format": "json"}]: dispatch Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta' Feb 1 04:59:12 localhost nova_compute[274317]: 2026-02-01 09:59:12.534 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:768c0cef-f962-4eff-a24c-02f86df937ba, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:13 localhost dnsmasq[314345]: exiting on receipt of SIGTERM Feb 1 04:59:13 localhost podman[314978]: 2026-02-01 09:59:13.119978663 +0000 UTC m=+0.063757858 container kill b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:59:13 localhost systemd[1]: libpod-b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8.scope: Deactivated successfully. Feb 1 04:59:13 localhost podman[314992]: 2026-02-01 09:59:13.196920961 +0000 UTC m=+0.060865277 container died b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 1 04:59:13 localhost systemd[1]: tmp-crun.iU4DMF.mount: Deactivated successfully. 
Feb 1 04:59:13 localhost podman[314992]: 2026-02-01 09:59:13.240642294 +0000 UTC m=+0.104586580 container cleanup b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 04:59:13 localhost systemd[1]: libpod-conmon-b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8.scope: Deactivated successfully. Feb 1 04:59:13 localhost podman[314994]: 2026-02-01 09:59:13.328037287 +0000 UTC m=+0.186154002 container remove b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efcb439d-5008-49b3-8d74-fc95eb1e0a3c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:59:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:13.389 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:13 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:13.861 259225 INFO neutron.agent.linux.ip_lib [None req-a957dd9d-aaeb-4fbf-bd60-2d01c8d33015 - - - - - -] Device tap8e0745c9-47 cannot be used as it has no MAC address#033[00m Feb 1 04:59:13 localhost nova_compute[274317]: 2026-02-01 09:59:13.909 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:13 localhost kernel: device tap8e0745c9-47 entered promiscuous mode Feb 1 04:59:13 localhost NetworkManager[5972]: [1769939953.9192] manager: (tap8e0745c9-47): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Feb 1 04:59:13 localhost ovn_controller[152787]: 2026-02-01T09:59:13Z|00240|binding|INFO|Claiming lport 8e0745c9-4755-4917-844d-acaa5ec19a3f for this chassis. Feb 1 04:59:13 localhost ovn_controller[152787]: 2026-02-01T09:59:13Z|00241|binding|INFO|8e0745c9-4755-4917-844d-acaa5ec19a3f: Claiming unknown Feb 1 04:59:13 localhost nova_compute[274317]: 2026-02-01 09:59:13.921 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:13 localhost systemd-udevd[315031]: Network interface NamePolicy= disabled on kernel command line. 
Feb 1 04:59:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:13.930 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-db4796fe-8da5-42e5-beb8-ef32cfa5ba89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db4796fe-8da5-42e5-beb8-ef32cfa5ba89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6f947c1-544d-485e-899d-5026404fa905, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e0745c9-4755-4917-844d-acaa5ec19a3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:13.933 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 8e0745c9-4755-4917-844d-acaa5ec19a3f in datapath db4796fe-8da5-42e5-beb8-ef32cfa5ba89 bound to our chassis#033[00m Feb 1 04:59:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:13.936 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db4796fe-8da5-42e5-beb8-ef32cfa5ba89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 04:59:13 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:13.938 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[95ec30c6-f68d-4a46-b169-a13a6cf507f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost ovn_controller[152787]: 2026-02-01T09:59:13Z|00242|binding|INFO|Setting lport 8e0745c9-4755-4917-844d-acaa5ec19a3f ovn-installed in OVS Feb 1 04:59:13 localhost ovn_controller[152787]: 2026-02-01T09:59:13Z|00243|binding|INFO|Setting lport 8e0745c9-4755-4917-844d-acaa5ec19a3f up in Southbound Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost nova_compute[274317]: 2026-02-01 09:59:13.967 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:13 localhost journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:14 localhost 
journal[224955]: ethtool ioctl error on tap8e0745c9-47: No such device Feb 1 04:59:14 localhost nova_compute[274317]: 2026-02-01 09:59:14.017 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:14 localhost nova_compute[274317]: 2026-02-01 09:59:14.042 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:14 localhost systemd[1]: tmp-crun.6duVYc.mount: Deactivated successfully. Feb 1 04:59:14 localhost systemd[1]: var-lib-containers-storage-overlay-3d531bacba67f6f62eddd0d069e5b217cc14d9a3f2a39da03970900ebf0f5bfb-merged.mount: Deactivated successfully. Feb 1 04:59:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b120198883d37bfddc45936513c726a48c3fad611c725f95d1e4900378b085d8-userdata-shm.mount: Deactivated successfully. Feb 1 04:59:14 localhost systemd[1]: run-netns-qdhcp\x2defcb439d\x2d5008\x2d49b3\x2d8d74\x2dfc95eb1e0a3c.mount: Deactivated successfully. Feb 1 04:59:14 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:14.177 259225 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 146 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 4.7 KiB/s wr, 1 op/s Feb 1 04:59:14 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:14.500 2 INFO neutron.agent.securitygroups_rpc [None req-5e11ff1d-8674-48ea-81ee-b80b81afee00 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['7c2cef09-1439-45eb-af5a-e316fd7a5ca9']#033[00m Feb 1 04:59:14 localhost nova_compute[274317]: 2026-02-01 09:59:14.726 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:14 localhost podman[315102]: Feb 1 04:59:14 localhost podman[315102]: 2026-02-01 09:59:14.983957884 +0000 UTC m=+0.093605107 container create 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 1 04:59:15 localhost systemd[1]: Started libpod-conmon-0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21.scope. Feb 1 04:59:15 localhost podman[315102]: 2026-02-01 09:59:14.938921411 +0000 UTC m=+0.048568694 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 04:59:15 localhost systemd[1]: Started libcrun container. Feb 1 04:59:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 1 04:59:15 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2915970543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 1 04:59:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346ecdc71b972c7c9380ff6b9d8627db59885136140e3209e42239b23da8a64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 04:59:15 localhost podman[315102]: 2026-02-01 09:59:15.065343771 +0000 UTC m=+0.174991024 container init 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 04:59:15 localhost podman[315102]: 2026-02-01 09:59:15.074575419 +0000 UTC m=+0.184222642 container start 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 04:59:15 localhost dnsmasq[315121]: started, version 2.85 cachesize 150 Feb 1 04:59:15 localhost dnsmasq[315121]: DNS service limited to local subnets Feb 1 04:59:15 localhost dnsmasq[315121]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 04:59:15 localhost dnsmasq[315121]: warning: no upstream servers configured Feb 1 04:59:15 localhost dnsmasq-dhcp[315121]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 04:59:15 localhost dnsmasq[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/addn_hosts - 0 addresses Feb 1 04:59:15 localhost dnsmasq-dhcp[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/host Feb 1 04:59:15 localhost dnsmasq-dhcp[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/opts Feb 1 04:59:15 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:15.120 2 INFO neutron.agent.securitygroups_rpc [None req-d9b77fb6-ebc7-41c7-a5b0-8d63738e2d77 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['7c2cef09-1439-45eb-af5a-e316fd7a5ca9']#033[00m Feb 1 04:59:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:15 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:15.234 259225 INFO neutron.agent.dhcp.agent [None req-c4afd2b4-de26-4382-a373-7a98d98ac865 - - - - - -] DHCP configuration for ports {'42220e7b-b7fd-49ee-9ded-1abeca61bab9'} is completed#033[00m Feb 1 04:59:15 localhost sshd[315122]: main: sshd: ssh-rsa algorithm is disabled Feb 1 04:59:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1_2507b9ff-e305-4644-a480-f9c1a78c9d40", "force": true, "format": "json"}]: dispatch Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1_2507b9ff-e305-4644-a480-f9c1a78c9d40, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta' Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1_2507b9ff-e305-4644-a480-f9c1a78c9d40, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "snap_name": "b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1", "force": true, "format": "json"}]: dispatch Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta.tmp' to config b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840/.meta' Feb 1 04:59:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0497c4b-d9c3-43be-bbb4-ed8008c4b7c1, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e207 e207: 6 total, 6 up, 6 in Feb 1 04:59:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 146 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 4.9 KiB/s wr, 1 op/s Feb 1 04:59:16 localhost nova_compute[274317]: 2026-02-01 09:59:16.359 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e208 e208: 6 total, 6 up, 6 in Feb 1 04:59:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:17.001 2 INFO neutron.agent.securitygroups_rpc [None req-7f0248f7-eb48-44f1-afa4-6c21da99fef7 3e3e53f98794468b9dd11f09fac77776 
0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:17.113 2 INFO neutron.agent.securitygroups_rpc [None req-8aeaa6d8-5ca7-4c3a-b8a2-e5b99f7af336 d74a270228ef43bb9eebd5b8b203e133 fea4c3ac6fd14aee8b0de1bad5f8673a - - default default] Security group member updated ['090b75a3-5cce-4012-8bbe-4b851ef442c2']#033[00m Feb 1 04:59:17 localhost systemd[1]: tmp-crun.VXCC8K.mount: Deactivated successfully. Feb 1 04:59:17 localhost dnsmasq[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/addn_hosts - 0 addresses Feb 1 04:59:17 localhost dnsmasq-dhcp[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/host Feb 1 04:59:17 localhost dnsmasq-dhcp[315121]: read /var/lib/neutron/dhcp/db4796fe-8da5-42e5-beb8-ef32cfa5ba89/opts Feb 1 04:59:17 localhost podman[315141]: 2026-02-01 09:59:17.236248839 +0000 UTC m=+0.077377902 container kill 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 04:59:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:17.319 2 INFO neutron.agent.securitygroups_rpc [None req-bfce6195-4572-4858-b958-8ae0eaa1dc23 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:17 localhost ovn_controller[152787]: 2026-02-01T09:59:17Z|00244|binding|INFO|Removing iface tap8e0745c9-47 ovn-installed in OVS Feb 1 04:59:17 localhost ovn_controller[152787]: 2026-02-01T09:59:17Z|00245|binding|INFO|Removing lport 8e0745c9-4755-4917-844d-acaa5ec19a3f ovn-installed in OVS Feb 1 04:59:17 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:17.448 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6fb0cad9-71f8-424f-b704-4ac7422d6ca8 with type ""#033[00m Feb 1 04:59:17 localhost nova_compute[274317]: 2026-02-01 09:59:17.451 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:17 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:17.452 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-db4796fe-8da5-42e5-beb8-ef32cfa5ba89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db4796fe-8da5-42e5-beb8-ef32cfa5ba89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': 
'3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6f947c1-544d-485e-899d-5026404fa905, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e0745c9-4755-4917-844d-acaa5ec19a3f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:17 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:17.455 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 8e0745c9-4755-4917-844d-acaa5ec19a3f in datapath db4796fe-8da5-42e5-beb8-ef32cfa5ba89 unbound from our chassis#033[00m Feb 1 04:59:17 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:17.459 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db4796fe-8da5-42e5-beb8-ef32cfa5ba89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:59:17 localhost nova_compute[274317]: 2026-02-01 09:59:17.459 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:17 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:17.460 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[b41210c7-2255-4c1b-b341-a2968e7e9a1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:59:17 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:17.501 259225 INFO neutron.agent.dhcp.agent [None req-50ebb1dc-6507-4b64-b277-cd3328a48a1c - - - - - -] DHCP configuration for ports {'42220e7b-b7fd-49ee-9ded-1abeca61bab9', '8e0745c9-4755-4917-844d-acaa5ec19a3f'} is completed#033[00m Feb 1 04:59:17 localhost nova_compute[274317]: 2026-02-01 09:59:17.536 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:17 localhost podman[315177]: 2026-02-01 09:59:17.609684888 +0000 UTC m=+0.062534821 container kill 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:59:17 localhost dnsmasq[315121]: exiting on receipt of SIGTERM Feb 1 04:59:17 localhost systemd[1]: libpod-0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21.scope: Deactivated successfully. 
Feb 1 04:59:17 localhost podman[315189]: 2026-02-01 09:59:17.689084382 +0000 UTC m=+0.064447300 container died 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Feb 1 04:59:17 localhost podman[315189]: 2026-02-01 09:59:17.717766146 +0000 UTC m=+0.093128994 container cleanup 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:59:17 localhost systemd[1]: libpod-conmon-0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21.scope: Deactivated successfully. Feb 1 04:59:17 localhost podman[315191]: 2026-02-01 09:59:17.76796124 +0000 UTC m=+0.135416361 container remove 0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-db4796fe-8da5-42e5-beb8-ef32cfa5ba89, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 04:59:17 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:17.779 2 INFO neutron.agent.securitygroups_rpc [None req-a584f8be-9a76-404e-9854-afe457f9fc2e 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:17 localhost kernel: device tap8e0745c9-47 left promiscuous mode Feb 1 04:59:17 localhost nova_compute[274317]: 2026-02-01 09:59:17.818 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:17 localhost nova_compute[274317]: 2026-02-01 09:59:17.831 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:17 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:17.855 259225 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 1 04:59:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:18.012 259225 INFO neutron.agent.dhcp.agent [None req-7dd38096-b9ec-4a6c-acff-8934fe3456bc - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 1 04:59:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:18.014 259225 INFO neutron.agent.dhcp.agent [-] Starting network db4796fe-8da5-42e5-beb8-ef32cfa5ba89 dhcp configuration#033[00m Feb 1 04:59:18 
localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:18.014 259225 INFO neutron.agent.dhcp.agent [-] Finished network db4796fe-8da5-42e5-beb8-ef32cfa5ba89 dhcp configuration#033[00m Feb 1 04:59:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:18.014 259225 INFO neutron.agent.dhcp.agent [None req-7dd38096-b9ec-4a6c-acff-8934fe3456bc - - - - - -] Synchronizing state complete#033[00m Feb 1 04:59:18 localhost nova_compute[274317]: 2026-02-01 09:59:18.086 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:18 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:18.120 259225 INFO neutron.agent.dhcp.agent [None req-c89990d5-78d9-464e-a59a-c794cde743d7 - - - - - -] DHCP configuration for ports {'42220e7b-b7fd-49ee-9ded-1abeca61bab9'} is completed#033[00m Feb 1 04:59:18 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:18.201 2 INFO neutron.agent.securitygroups_rpc [None req-e84e7596-d223-4f47-8254-074f89f65fdc 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:18 localhost systemd[1]: var-lib-containers-storage-overlay-2346ecdc71b972c7c9380ff6b9d8627db59885136140e3209e42239b23da8a64-merged.mount: Deactivated successfully. Feb 1 04:59:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b43c1664215a7a3f7c6da320082b787ce24961f23469394a2a91fdd1c2d7e21-userdata-shm.mount: Deactivated successfully. Feb 1 04:59:18 localhost systemd[1]: run-netns-qdhcp\x2ddb4796fe\x2d8da5\x2d42e5\x2dbeb8\x2def32cfa5ba89.mount: Deactivated successfully. Feb 1 04:59:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 146 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 23 KiB/s wr, 29 op/s Feb 1 04:59:18 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:18.532 2 INFO neutron.agent.securitygroups_rpc [None req-e742ba15-30cb-436e-a4c2-2391ce2e0670 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:18 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:18.888 2 INFO neutron.agent.securitygroups_rpc [None req-46338e73-87bf-491f-8ed0-9b7015f713f3 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['95f04fd7-2a7a-45df-b586-d0771f0c51c2']#033[00m Feb 1 04:59:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e209 e209: 6 total, 6 up, 6 in Feb 1 04:59:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "17417769-f267-43fd-88c0-c1785d065840", "format": "json"}]: dispatch Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:17417769-f267-43fd-88c0-c1785d065840, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:17417769-f267-43fd-88c0-c1785d065840, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:19 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '17417769-f267-43fd-88c0-c1785d065840' of type 
subvolume Feb 1 04:59:19 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:59:19.045+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '17417769-f267-43fd-88c0-c1785d065840' of type subvolume Feb 1 04:59:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "17417769-f267-43fd-88c0-c1785d065840", "force": true, "format": "json"}]: dispatch Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/17417769-f267-43fd-88c0-c1785d065840'' moved to trashcan Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:59:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:17417769-f267-43fd-88c0-c1785d065840, vol_name:cephfs) < "" Feb 1 04:59:19 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:19.742 2 INFO neutron.agent.securitygroups_rpc [None req-7124cdef-fc68-40e6-a617-ec7afce96cf9 3e3e53f98794468b9dd11f09fac77776 0721038c814c404d9f2aa1ec859c5601 - - default default] Security group rule updated ['05a59877-b29c-4804-965e-2274924179d2']#033[00m Feb 1 04:59:19 localhost podman[315239]: 2026-02-01 09:59:19.824608446 +0000 UTC m=+0.058082320 container kill 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 1 04:59:19 localhost dnsmasq[312654]: exiting on receipt of SIGTERM Feb 1 04:59:19 localhost systemd[1]: libpod-6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b.scope: Deactivated successfully. 
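The ceph-mgr audit entries above record the CephFS driver asking for "fs clone status" on a subvolume that was never created as a clone, which the volumes module rejects with EOPNOTSUPP (95), and then removing it with a forced "fs subvolume rm", which moves the data to the trashcan and queues an asynchronous purge job. A rough sketch of the same sequence driven through the ceph CLI from Python, using the names from the log; error handling is simplified and the ceph CLI is assumed to be available with suitable credentials.

    import json
    import subprocess

    VOL = "cephfs"
    SUB = "17417769-f267-43fd-88c0-c1785d065840"

    def ceph(*args):
        """Run a ceph CLI command, returning (returncode, stdout)."""
        proc = subprocess.run(["ceph", *args, "--format", "json"],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout

    # "fs clone status" only applies to subvolumes created as clones; for a
    # plain subvolume the mgr replies EOPNOTSUPP, exactly as in the log.
    rc, out = ceph("fs", "clone", "status", VOL, SUB)
    if rc != 0:
        # Not a clone: fall back to a forced removal, which parks the
        # subvolume in the trashcan for the async purge threads.
        ceph("fs", "subvolume", "rm", VOL, SUB, "--force")
    else:
        print(json.dumps(json.loads(out), indent=2))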
Feb 1 04:59:19 localhost ovn_controller[152787]: 2026-02-01T09:59:19Z|00246|binding|INFO|Removing iface tap3a91aa3a-fb ovn-installed in OVS Feb 1 04:59:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:19.840 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6b9acee3-bb0b-4711-8e27-ab74e9f04414 with type ""#033[00m Feb 1 04:59:19 localhost ovn_controller[152787]: 2026-02-01T09:59:19Z|00247|binding|INFO|Removing lport 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 ovn-installed in OVS Feb 1 04:59:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:19.842 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c23ed2a9641444eeac6ffb9689135326', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28f2370b-4aa7-434f-90cb-05cc01bed2bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a91aa3a-fb9f-4945-91e3-85f0d278b0b5) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:19.846 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 3a91aa3a-fb9f-4945-91e3-85f0d278b0b5 in datapath 4da937bf-f66e-48ce-bf66-f7d3d9f7bc52 unbound from our chassis#033[00m Feb 1 04:59:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:19.849 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 04:59:19 localhost nova_compute[274317]: 2026-02-01 09:59:19.850 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:19 localhost nova_compute[274317]: 2026-02-01 09:59:19.851 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:19 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:19.850 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[79ddea18-c0e6-45eb-90fd-69e77e9cba00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 04:59:19 localhost podman[315252]: 2026-02-01 09:59:19.885204055 +0000 UTC m=+0.047491191 container died 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:59:19 localhost podman[315252]: 2026-02-01 09:59:19.968688087 +0000 UTC m=+0.130975143 container cleanup 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 04:59:19 localhost systemd[1]: libpod-conmon-6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b.scope: Deactivated successfully. Feb 1 04:59:19 localhost podman[315254]: 2026-02-01 09:59:19.992148738 +0000 UTC m=+0.146075483 container remove 6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4da937bf-f66e-48ce-bf66-f7d3d9f7bc52, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:59:20 localhost kernel: device tap3a91aa3a-fb left promiscuous mode Feb 1 04:59:20 localhost nova_compute[274317]: 2026-02-01 09:59:20.006 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e210 e210: 6 total, 6 up, 6 in Feb 1 04:59:20 localhost nova_compute[274317]: 2026-02-01 09:59:20.023 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:20 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:20.054 259225 INFO neutron.agent.dhcp.agent [None req-5dcb9013-3153-4a15-b221-e281c05a548d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:20 localhost neutron_dhcp_agent[259221]: 2026-02-01 09:59:20.055 259225 INFO neutron.agent.dhcp.agent [None req-5dcb9013-3153-4a15-b221-e281c05a548d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 04:59:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:20 localhost nova_compute[274317]: 2026-02-01 09:59:20.242 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 146 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 34 KiB/s wr, 52 op/s Feb 
1 04:59:20 localhost systemd[1]: var-lib-containers-storage-overlay-4383ecd4dd19cf8e2c67a94d44ef1d65db9bfee3a02ca20cd31b89d8b95a13e4-merged.mount: Deactivated successfully. Feb 1 04:59:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6556794d4e5bf704e06bf800eff0a39e5cd58b47cd3e5cb402c8509b64928f0b-userdata-shm.mount: Deactivated successfully. Feb 1 04:59:20 localhost systemd[1]: run-netns-qdhcp\x2d4da937bf\x2df66e\x2d48ce\x2dbf66\x2df7d3d9f7bc52.mount: Deactivated successfully. Feb 1 04:59:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e211 e211: 6 total, 6 up, 6 in Feb 1 04:59:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_09:59:21 Feb 1 04:59:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 04:59:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 04:59:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['backups', 'volumes', 'manila_data', 'vms', '.mgr', 'manila_metadata', 'images'] Feb 1 04:59:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 04:59:21 localhost nova_compute[274317]: 2026-02-01 09:59:21.410 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
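The balancer lines above show the upmap balancer building its periodic optimize plan over the listed pools and preparing 0/10 changes, meaning the PG distribution is already within the 5% max-misplaced threshold. A small sketch of checking that state out of band; it assumes the ceph CLI is reachable with admin-capable credentials and that the balancer module reports its usual "active" and "mode" fields.

    import json
    import subprocess

    def ceph_json(*args):
        """Run `ceph ... --format json` and decode the reply."""
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    status = ceph_json("balancer", "status")
    print("active:", status.get("active"), "mode:", status.get("mode"))

    # `ceph balancer eval` scores the current distribution; lower is better.
    print(subprocess.run(["ceph", "balancer", "eval"],
                         capture_output=True, text=True).stdout.strip())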
Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003266113553880514 quantized to 32 (current 32) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 04:59:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 3.107987855946399e-05 of space, bias 4.0, pg target 0.024739583333333332 quantized to 16 (current 16) Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:59:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 04:59:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e212 e212: 6 total, 6 up, 6 in 
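The pg_autoscaler lines above log, per pool, the fraction of raw capacity in use, the pool's bias, and a pg target before quantization. For the '.mgr' and 'vms' pools those targets are reproduced exactly by capacity_ratio * bias * target_pg_per_osd * num_osds / replica_size, assuming the default of 100 target PGs per OSD, the 6 OSDs reported in the osdmap lines, and 3x replication; those three inputs are assumptions, not values read from this log. A worked check under those assumptions:

    # Worked check of the pg_autoscaler figure logged for pool 'vms'.
    # Assumed inputs (not stated in the log): mon_target_pg_per_osd=100,
    # 6 OSDs (matches the "6 total, 6 up, 6 in" osdmap lines), size=3.
    capacity_ratio = 0.003325274375348967   # "using ... of space" for 'vms'
    bias = 1.0
    target_pg_per_osd = 100
    num_osds = 6
    replica_size = 3

    pg_target = capacity_ratio * bias * target_pg_per_osd * num_osds / replica_size
    print(round(pg_target, 6))   # 0.665055, matching "pg target 0.6650548..."
    # The autoscaler then rounds to a power of two and applies per-pool
    # minimums, which is why the final value is "quantized to 32".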
Feb 1 04:59:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/.meta.tmp' Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/.meta.tmp' to config b'/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/.meta' Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "format": "json"}]: dispatch Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 146 MiB data, 876 MiB used, 41 GiB / 42 GiB avail Feb 1 04:59:22 localhost nova_compute[274317]: 2026-02-01 09:59:22.538 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:23 localhost ceph-mgr[278126]: [devicehealth INFO root] Check health Feb 1 04:59:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 146 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 25 KiB/s wr, 152 op/s Feb 1 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
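The audit entries above show a 1 GiB namespace-isolated subvolume being created for 9ef26968-45d9-4b40-a5b1-d54b2ff71a2e and its export path being resolved straight afterwards with "fs subvolume getpath". An equivalent sequence through the ceph CLI, sketched in Python with flags mirroring the dispatched mgr command; the volume and subvolume names are the ones from the log.

    import subprocess

    VOL = "cephfs"
    SUB = "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e"

    # Create a 1 GiB subvolume in its own RADOS namespace with mode 0755,
    # matching the "fs subvolume create" arguments dispatched above.
    subprocess.run(["ceph", "fs", "subvolume", "create", VOL, SUB,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)

    # getpath returns the export path (e.g. /volumes/_nogroup/<name>/<uuid>),
    # which the share service hands back to clients as the share location.
    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", VOL, SUB],
                          capture_output=True, text=True, check=True).stdout.strip()
    print(path)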
Feb 1 04:59:24 localhost podman[315284]: 2026-02-01 09:59:24.86069793 +0000 UTC m=+0.073231653 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127) Feb 1 04:59:24 localhost systemd[1]: tmp-crun.yBFP0U.mount: Deactivated successfully. Feb 1 04:59:24 localhost podman[315282]: 2026-02-01 09:59:24.918699598 +0000 UTC m=+0.132226823 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7) Feb 1 04:59:24 localhost podman[315282]: 2026-02-01 09:59:24.934617993 +0000 UTC m=+0.148145188 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 04:59:24 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:59:24 localhost podman[315284]: 2026-02-01 09:59:24.972059191 +0000 UTC m=+0.184592934 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 04:59:24 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
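The systemd and podman entries above are the periodic container healthchecks: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman records a health_status=healthy event for the container (ovn_controller and openstack_network_exporter in the events just above), and the unit deactivates. A small sketch of running and reading one such check by hand; the container name comes from the log, and the inspect format string assumes podman's usual Healthcheck state field.

    import subprocess

    NAME = "ovn_controller"   # container name from the health_status events above

    # Trigger one healthcheck run, which is what the transient unit does.
    rc = subprocess.run(["podman", "healthcheck", "run", NAME]).returncode
    print("healthcheck exit code:", rc)   # 0 means healthy

    # Read back the recorded health state from the container metadata.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Healthcheck.Status}}", NAME],
        capture_output=True, text=True).stdout.strip()
    print("recorded status:", out)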
Feb 1 04:59:25 localhost podman[315283]: 2026-02-01 09:59:25.022027218 +0000 UTC m=+0.232046313 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:59:25 localhost podman[315283]: 2026-02-01 09:59:25.027268091 +0000 UTC m=+0.237287176 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Feb 1 04:59:25 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 04:59:25 localhost podman[315285]: 2026-02-01 09:59:25.075086042 +0000 UTC m=+0.280812053 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 04:59:25 localhost podman[315285]: 2026-02-01 09:59:25.083275326 +0000 UTC m=+0.289001267 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:59:25 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:59:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve49", "tenant_id": "9d23e4ae23d44fac9f67906e518759ed", "access_level": "rw", "format": "json"}]: dispatch Feb 1 04:59:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Feb 1 04:59:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 1 04:59:25 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID eve49 with tenant 9d23e4ae23d44fac9f67906e518759ed Feb 1 04:59:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} v 0) Feb 1 04:59:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:26 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 1 04:59:26 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:26 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' 
entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:26 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"}]': finished Feb 1 04:59:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 146 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 93 KiB/s rd, 22 KiB/s wr, 129 op/s Feb 1 04:59:26 localhost nova_compute[274317]: 2026-02-01 09:59:26.448 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "96e32855-572c-434b-9f41-cf83f652dd08", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/96e32855-572c-434b-9f41-cf83f652dd08/.meta.tmp' Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96e32855-572c-434b-9f41-cf83f652dd08/.meta.tmp' to config b'/volumes/_nogroup/96e32855-572c-434b-9f41-cf83f652dd08/.meta' Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "96e32855-572c-434b-9f41-cf83f652dd08", "format": "json"}]: dispatch Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e213 e213: 6 total, 6 up, 6 in Feb 1 04:59:27 localhost nova_compute[274317]: 2026-02-01 09:59:27.542 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 
04:59:27 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:27.938 2 INFO neutron.agent.securitygroups_rpc [None req-1f1f07c2-7f26-4f08-b340-54cc58b1fa0f ce67f2e1bfb142d8acccf95caf1fd7af ab1e856df66342919053583b6afafe11 - - default default] Security group member updated ['0924a62e-9fd4-48bb-ad08-68af324d32a1']#033[00m Feb 1 04:59:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e214 e214: 6 total, 6 up, 6 in Feb 1 04:59:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 146 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 129 KiB/s rd, 41 KiB/s wr, 185 op/s Feb 1 04:59:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve48", "tenant_id": "9d23e4ae23d44fac9f67906e518759ed", "access_level": "rw", "format": "json"}]: dispatch Feb 1 04:59:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 1 04:59:28 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 1 04:59:28 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID eve48 with tenant 9d23e4ae23d44fac9f67906e518759ed Feb 1 04:59:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} v 0) Feb 1 04:59:28 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:29 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 1 04:59:29 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", 
"allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:29 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:29 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"}]': finished Feb 1 04:59:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e215 e215: 6 total, 6 up, 6 in Feb 1 04:59:29 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Feb 1 04:59:30 localhost podman[236852]: time="2026-02-01T09:59:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 04:59:30 localhost podman[236852]: @ - - [01/Feb/2026:09:59:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 04:59:30 localhost podman[236852]: @ - - [01/Feb/2026:09:59:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18302 "" "Go-http-client/1.1" Feb 1 04:59:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e216 e216: 6 total, 6 up, 6 in Feb 1 04:59:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 146 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 32 KiB/s wr, 98 op/s Feb 1 04:59:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e217 e217: 6 total, 6 up, 6 in Feb 1 04:59:31 localhost nova_compute[274317]: 2026-02-01 09:59:31.452 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:31 localhost openstack_network_exporter[239388]: ERROR 09:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 04:59:31 localhost openstack_network_exporter[239388]: Feb 1 04:59:31 localhost openstack_network_exporter[239388]: ERROR 09:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 04:59:31 localhost openstack_network_exporter[239388]: Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e218 e218: 6 total, 6 up, 6 in Feb 1 04:59:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 146 MiB data, 885 MiB used, 41 GiB / 42 GiB avail Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] 
: from='client.? 172.18.0.32:0/2555453464' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2555453464' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:59:32 localhost nova_compute[274317]: 2026-02-01 09:59:32.543 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2037329309' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2037329309' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:59:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve48", "format": "json"}]: dispatch Feb 1 04:59:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 1 04:59:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Feb 1 04:59:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 1 04:59:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve48", "format": "json"}]: dispatch Feb 1 04:59:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:33 localhost ceph-mgr[278126]: [volumes 
INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b Feb 1 04:59:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 04:59:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:33 localhost neutron_sriov_agent[252054]: 2026-02-01 09:59:33.151 2 INFO neutron.agent.securitygroups_rpc [None req-842f7c09-bc9b-4f83-bb5e-50b631488f24 ce67f2e1bfb142d8acccf95caf1fd7af ab1e856df66342919053583b6afafe11 - - default default] Security group member updated ['0924a62e-9fd4-48bb-ad08-68af324d32a1']#033[00m Feb 1 04:59:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 1 04:59:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 1 04:59:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 1 04:59:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Feb 1 04:59:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96e32855-572c-434b-9f41-cf83f652dd08", "format": "json"}]: dispatch Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96e32855-572c-434b-9f41-cf83f652dd08, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96e32855-572c-434b-9f41-cf83f652dd08, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:34 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:59:34.111+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96e32855-572c-434b-9f41-cf83f652dd08' of type subvolume Feb 1 04:59:34 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96e32855-572c-434b-9f41-cf83f652dd08' of type subvolume Feb 1 04:59:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "96e32855-572c-434b-9f41-cf83f652dd08", "force": true, "format": "json"}]: dispatch Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/96e32855-572c-434b-9f41-cf83f652dd08'' moved to trashcan Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Feb 1 04:59:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96e32855-572c-434b-9f41-cf83f652dd08, vol_name:cephfs) < "" Feb 1 04:59:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e219 e219: 6 total, 6 up, 6 in Feb 1 04:59:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 146 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 267 KiB/s rd, 40 KiB/s wr, 366 op/s Feb 1 04:59:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 04:59:35 localhost systemd[1]: tmp-crun.1ohAsJ.mount: Deactivated successfully. Feb 1 04:59:35 localhost podman[315369]: 2026-02-01 09:59:35.866634026 +0000 UTC m=+0.078159647 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 1 04:59:35 localhost podman[315369]: 2026-02-01 09:59:35.879734844 +0000 UTC m=+0.091260515 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 04:59:35 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 04:59:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve47", "tenant_id": "9d23e4ae23d44fac9f67906e518759ed", "access_level": "rw", "format": "json"}]: dispatch Feb 1 04:59:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Feb 1 04:59:36 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 1 04:59:36 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID eve47 with tenant 9d23e4ae23d44fac9f67906e518759ed Feb 1 04:59:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} v 0) Feb 1 04:59:36 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, tenant_id:9d23e4ae23d44fac9f67906e518759ed, vol_name:cephfs) < "" Feb 1 04:59:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 146 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 183 KiB/s rd, 28 KiB/s wr, 250 op/s Feb 1 04:59:36 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 1 04:59:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:36 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"} : dispatch Feb 1 04:59:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b", "osd", "allow rw pool=manila_data namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "mon", "allow r"], "format": "json"}]': finished Feb 1 04:59:36 localhost nova_compute[274317]: 2026-02-01 09:59:36.459 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e220 e220: 6 total, 6 up, 6 in Feb 1 04:59:36 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Feb 1 04:59:37 localhost nova_compute[274317]: 2026-02-01 09:59:37.584 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 146 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 267 KiB/s rd, 65 KiB/s wr, 374 op/s Feb 1 04:59:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
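The authorize path above ends with the mgr issuing an "auth get-or-create" for client.eve47, scoped to the subvolume path (mds cap), its RADOS namespace in the manila_data pool (osd cap), and read-only monitor access. A minimal sketch of sending that same monitor command through the python-rados bindings; the conffile path and the admin client name are illustrative assumptions, not taken from this log.

```python
# Sketch: issue the "auth get-or-create" mon command seen in the audit log above
# via python-rados. Paths/IDs are copied from the log; conffile and client name
# are assumptions for illustration only.
import json
import rados

subvol_path = ("/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/"
               "cf2be5d6-9de6-4d4b-bfd9-aa075972364b")
cmd = {
    "prefix": "auth get-or-create",
    "entity": "client.eve47",
    "caps": [
        "mds", f"allow rw path={subvol_path}",
        "osd", ("allow rw pool=manila_data "
                "namespace=fsvolumens_9ef26968-45d9-4b40-a5b1-d54b2ff71a2e"),
        "mon", "allow r",
    ],
    "format": "json",
}

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()
try:
    ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, json.loads(outbuf) if ret == 0 else errs)
finally:
    cluster.shutdown()
```

In the log this command is not issued by hand: the Manila-driven "fs subvolume authorize" mgr call delegates to the volumes module, which then dispatches the same mon command as mgr.np0005604215.uhhqtv.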
Feb 1 04:59:38 localhost podman[315388]: 2026-02-01 09:59:38.867990695 +0000 UTC m=+0.079753226 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 04:59:38 localhost podman[315388]: 2026-02-01 09:59:38.880392012 +0000 UTC m=+0.092154583 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 04:59:38 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 04:59:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve47", "format": "json"}]: dispatch Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Feb 1 04:59:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 1 04:59:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Feb 1 04:59:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve47", "format": "json"}]: dispatch Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:39 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 1 04:59:39 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Feb 1 04:59:39 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Feb 1 04:59:39 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 04:59:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:40 
localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 146 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 202 KiB/s rd, 49 KiB/s wr, 283 op/s Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.938633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939980938742, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2827, "num_deletes": 276, "total_data_size": 5484590, "memory_usage": 5668256, "flush_reason": "Manual Compaction"} Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939980960021, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 3567095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22399, "largest_seqno": 25220, "table_properties": {"data_size": 3556054, "index_size": 7098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25594, "raw_average_key_size": 22, "raw_value_size": 3533067, "raw_average_value_size": 3074, "num_data_blocks": 301, "num_entries": 1149, "num_filter_entries": 1149, "num_deletions": 276, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939851, "oldest_key_time": 1769939851, "file_creation_time": 1769939980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 21447 microseconds, and 10515 cpu microseconds. Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
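The ceph-mon store at /var/lib/ceph/mon/ceph-np0005604215/store.db is RocksDB, and the flush and compaction activity in this window carries structured EVENT_LOG_v1 JSON payloads embedded in the journal lines. A small sketch for pulling those payloads back out and summarizing them; the input filename is an assumption.

```python
# Sketch: extract the RocksDB EVENT_LOG_v1 JSON (flush_started,
# table_file_creation, flush_finished, compaction_finished, ...) from
# ceph-mon journal lines like the ones above. "ceph-mon.log" is an
# assumed capture of these lines.
import json
import re

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

with open("ceph-mon.log") as fh:
    for line in fh:
        m = EVENT_RE.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "table_file_creation":
            props = ev["table_properties"]
            print(f"job {ev['job']}: file #{ev['file_number']}, "
                  f"{ev['file_size']} bytes, {props['num_entries']} entries")
        elif ev.get("event") == "flush_finished":
            print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
```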
Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.960091) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 3567095 bytes OK Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.960120) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.962144) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.962167) EVENT_LOG_v1 {"time_micros": 1769939980962161, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.962191) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 5471569, prev total WAL file size 5471569, number of live WAL files 2. Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.963500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(3483KB)], [33(18MB)] Feb 1 04:59:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939980963555, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 23027899, "oldest_snapshot_seqno": -1} Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 13243 keys, 21758735 bytes, temperature: kUnknown Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939981118671, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 21758735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21681473, "index_size": 43049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33157, "raw_key_size": 355091, "raw_average_key_size": 26, "raw_value_size": 21454201, "raw_average_value_size": 1620, "num_data_blocks": 1623, "num_entries": 13243, "num_filter_entries": 13243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.119011) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 21758735 bytes Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.120774) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.4 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 18.6 +0.0 blob) out(20.8 +0.0 blob), read-write-amplify(12.6) write-amplify(6.1) OK, records in: 13802, records dropped: 559 output_compression: NoCompression Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.120802) EVENT_LOG_v1 {"time_micros": 1769939981120789, "job": 18, "event": "compaction_finished", "compaction_time_micros": 155204, "compaction_time_cpu_micros": 55427, "output_level": 6, "num_output_files": 1, "total_output_size": 21758735, "num_input_records": 13802, "num_output_records": 13243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939981121404, "job": 18, "event": "table_file_deletion", "file_number": 35} Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939981123868, "job": 18, "event": "table_file_deletion", "file_number": 33} Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:40.963447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.124084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.124095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.124099) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.124103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:41 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:41.124106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:41 localhost podman[315522]: 2026-02-01 09:59:41.189219307 +0000 UTC m=+0.094416484 container exec 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vcs-type=git, RELEASE=main, vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1764794109, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.created=2025-12-08T17:28:53Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-12-08T17:28:53Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 04:59:41 localhost podman[315522]: 2026-02-01 09:59:41.322886772 +0000 UTC m=+0.228083919 container exec_died 39a3032afbb342ba7bd100116836fc9c218c1425256845645b2093290e19b07a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-crash-np0005604215, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=688f299ec3c1ef405209c1d311a7fbd93d01b6f5, org.opencontainers.image.created=2025-12-08T17:28:53Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , build-date=2025-12-08T17:28:53Z, ceph=True, release=1764794109, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph) Feb 1 04:59:41 localhost nova_compute[274317]: 2026-02-01 09:59:41.460 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:41 localhost ovn_metadata_agent[158650]: 2026-02-01 
09:59:41.776 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:59:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:41.777 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:59:41 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:41.777 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:59:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e221 e221: 6 total, 6 up, 6 in Feb 1 04:59:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 04:59:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 146 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 29 KiB/s wr, 95 op/s Feb 1 04:59:42 localhost nova_compute[274317]: 2026-02-01 09:59:42.585 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm INFO root] Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:59:42 
localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mgr[278126]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mgr[278126]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 04:59:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:59:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 04:59:43 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev f3a89213-8821-4f90-884a-6194ac204334 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:59:43 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev f3a89213-8821-4f90-884a-6194ac204334 (Updating node-proxy deployment (+3 -> 3)) Feb 1 04:59:43 localhost ceph-mgr[278126]: [progress INFO root] Completed event f3a89213-8821-4f90-884a-6194ac204334 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 04:59:43 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 04:59:43 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 1 
04:59:43 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604215.localdomain to 836.6M Feb 1 04:59:43 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604215.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604213.localdomain to 836.6M Feb 1 04:59:43 localhost ceph-mon[298604]: Adjusting osd_memory_target on np0005604212.localdomain to 836.6M Feb 1 04:59:43 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604212.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:43 localhost ceph-mon[298604]: Unable to set osd_memory_target on np0005604213.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 04:59:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve49", "format": "json"}]: dispatch Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Feb 1 04:59:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", 
"entity": "client.eve49", "format": "json"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Feb 1 04:59:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "auth_id": "eve49", "format": "json"}]: dispatch Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e/cf2be5d6-9de6-4d4b-bfd9-aa075972364b Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 146 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 40 KiB/s wr, 113 op/s Feb 1 04:59:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "format": "json"}]: dispatch Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ef26968-45d9-4b40-a5b1-d54b2ff71a2e' of type subvolume Feb 1 04:59:44 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T09:59:44.451+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ef26968-45d9-4b40-a5b1-d54b2ff71a2e' of type subvolume Feb 1 04:59:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ef26968-45d9-4b40-a5b1-d54b2ff71a2e", "force": true, "format": "json"}]: dispatch Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ef26968-45d9-4b40-a5b1-d54b2ff71a2e'' moved to trashcan Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 04:59:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ef26968-45d9-4b40-a5b1-d54b2ff71a2e, vol_name:cephfs) < "" Feb 1 04:59:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:59:44 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/671789745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:59:44 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/671789745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Feb 1 04:59:44 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Feb 1 04:59:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:59:45 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2006111227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:59:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:59:45 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2006111227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:59:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e222 e222: 6 total, 6 up, 6 in Feb 1 04:59:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 146 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 11 KiB/s wr, 17 op/s Feb 1 04:59:46 localhost nova_compute[274317]: 2026-02-01 09:59:46.489 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:46 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 04:59:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 04:59:47 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 04:59:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e223 e223: 6 total, 6 up, 6 in Feb 1 04:59:47 localhost nova_compute[274317]: 2026-02-01 09:59:47.626 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 193 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 144 op/s Feb 1 04:59:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e224 e224: 6 total, 6 up, 6 in Feb 1 04:59:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 193 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 3.6 MiB/s wr, 133 op/s Feb 1 04:59:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e225 e225: 6 total, 6 up, 6 in Feb 1 04:59:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta' Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "format": "json"}]: dispatch Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 04:59:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 04:59:51 localhost nova_compute[274317]: 2026-02-01 09:59:51.535 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.824675) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991824750, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 573, "num_deletes": 258, "total_data_size": 656164, "memory_usage": 668112, "flush_reason": "Manual Compaction"} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991832021, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 430704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25225, "largest_seqno": 25793, "table_properties": {"data_size": 427432, "index_size": 1127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8498, "raw_average_key_size": 20, "raw_value_size": 420453, "raw_average_value_size": 1015, "num_data_blocks": 44, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939981, "oldest_key_time": 1769939981, "file_creation_time": 1769939991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 7385 microseconds, and 2525 cpu microseconds. Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.832070) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 430704 bytes OK Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.832092) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.834197) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.834222) EVENT_LOG_v1 {"time_micros": 1769939991834215, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.834245) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 652675, prev total WAL file size 652999, number of live WAL files 2. Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.835359) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303133' seq:72057594037927935, type:22 .. 
'6C6F676D0034323635' seq:0, type:0; will stop at (end) Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(420KB)], [36(20MB)] Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991835436, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 22189439, "oldest_snapshot_seqno": -1} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 13112 keys, 21455668 bytes, temperature: kUnknown Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991960006, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 21455668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21380240, "index_size": 41535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 353602, "raw_average_key_size": 26, "raw_value_size": 21156178, "raw_average_value_size": 1613, "num_data_blocks": 1548, "num_entries": 13112, "num_filter_entries": 13112, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769939991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.960414) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 21455668 bytes Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.962368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.0 rd, 172.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 20.8 +0.0 blob) out(20.5 +0.0 blob), read-write-amplify(101.3) write-amplify(49.8) OK, records in: 13657, records dropped: 545 output_compression: NoCompression Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.962399) EVENT_LOG_v1 {"time_micros": 1769939991962386, "job": 20, "event": "compaction_finished", "compaction_time_micros": 124659, "compaction_time_cpu_micros": 56834, "output_level": 6, "num_output_files": 1, "total_output_size": 21455668, "num_input_records": 13657, "num_output_records": 13112, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991962613, "job": 20, "event": "table_file_deletion", "file_number": 38} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769939991965426, "job": 20, "event": "table_file_deletion", "file_number": 36} Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.835238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.965462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.965467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.965472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.965476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:51 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-09:59:51.965479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 04:59:52 localhost nova_compute[274317]: 2026-02-01 09:59:52.095 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : 
pgmap v453: 177 pgs: 177 active+clean; 193 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 3.6 MiB/s wr, 133 op/s Feb 1 04:59:52 localhost nova_compute[274317]: 2026-02-01 09:59:52.661 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e226 e226: 6 total, 6 up, 6 in Feb 1 04:59:54 localhost nova_compute[274317]: 2026-02-01 09:59:54.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:54 localhost nova_compute[274317]: 2026-02-01 09:59:54.099 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 04:59:54 localhost nova_compute[274317]: 2026-02-01 09:59:54.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 04:59:54 localhost nova_compute[274317]: 2026-02-01 09:59:54.122 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 04:59:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "snap_name": "967f54ee-5f61-45c6-877b-9621e93b6257", "format": "json"}]: dispatch Feb 1 04:59:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 193 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 148 KiB/s rd, 13 KiB/s wr, 198 op/s Feb 1 04:59:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:54.444 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 04:59:54 localhost nova_compute[274317]: 2026-02-01 09:59:54.445 
274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:54 localhost ovn_metadata_agent[158650]: 2026-02-01 09:59:54.446 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.130 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.130 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.131 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.131 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.131 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:59:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 04:59:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:59:55 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4277162567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:59:55 localhost nova_compute[274317]: 2026-02-01 09:59:55.587 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 04:59:56 localhost systemd[1]: tmp-crun.cpUAhx.mount: Deactivated successfully. Feb 1 04:59:56 localhost podman[315753]: 2026-02-01 09:59:56.047527206 +0000 UTC m=+0.081805702 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Feb 1 04:59:56 localhost podman[315754]: 2026-02-01 09:59:56.066142515 +0000 UTC m=+0.093985980 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2) Feb 1 04:59:56 localhost podman[315753]: 2026-02-01 09:59:56.126423334 +0000 UTC m=+0.160701860 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 04:59:56 localhost podman[315752]: 2026-02-01 09:59:56.138531121 +0000 UTC m=+0.172064173 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., version=9.7, 
org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, release=1769056855, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible) Feb 1 04:59:56 localhost podman[315754]: 2026-02-01 09:59:56.141903776 +0000 UTC m=+0.169747241 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.150 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.152 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11549MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.152 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.152 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 04:59:56 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 04:59:56 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 04:59:56 localhost podman[315752]: 2026-02-01 09:59:56.217919015 +0000 UTC m=+0.251452137 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.openshift.expose-services=, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, version=9.7) Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.223 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.225 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 
used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 04:59:56 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.243 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 04:59:56 localhost podman[315759]: 2026-02-01 09:59:56.221791316 +0000 UTC m=+0.246744560 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 04:59:56 localhost podman[315759]: 2026-02-01 09:59:56.304783552 +0000 UTC m=+0.329736826 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 04:59:56 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 04:59:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 193 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 12 KiB/s wr, 171 op/s Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.537 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 04:59:56 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/345388689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.684 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.691 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.707 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.709 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 04:59:56 localhost nova_compute[274317]: 2026-02-01 09:59:56.710 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 04:59:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e227 e227: 6 total, 6 up, 6 in Feb 1 04:59:57 localhost systemd[1]: tmp-crun.9byqj1.mount: Deactivated successfully. 
Feb 1 04:59:57 localhost nova_compute[274317]: 2026-02-01 09:59:57.663 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 04:59:57 localhost nova_compute[274317]: 2026-02-01 09:59:57.710 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:57 localhost nova_compute[274317]: 2026-02-01 09:59:57.710 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:57 localhost nova_compute[274317]: 2026-02-01 09:59:57.711 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:57 localhost nova_compute[274317]: 2026-02-01 09:59:57.711 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 04:59:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e228 e228: 6 total, 6 up, 6 in Feb 1 04:59:58 localhost nova_compute[274317]: 2026-02-01 09:59:58.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 04:59:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 04:59:58 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/142970444' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 04:59:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 04:59:58 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/142970444' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 04:59:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "snap_name": "967f54ee-5f61-45c6-877b-9621e93b6257_d3fd438f-2f2b-4121-930e-8bd318b9b3ac", "force": true, "format": "json"}]: dispatch Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257_d3fd438f-2f2b-4121-930e-8bd318b9b3ac, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta' Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257_d3fd438f-2f2b-4121-930e-8bd318b9b3ac, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "snap_name": "967f54ee-5f61-45c6-877b-9621e93b6257", "force": true, "format": "json"}]: dispatch Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 29 KiB/s wr, 370 op/s Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta.tmp' to config b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b/.meta' Feb 1 04:59:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:967f54ee-5f61-45c6-877b-9621e93b6257, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 04:59:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e229 e229: 6 total, 6 up, 6 in Feb 1 05:00:00 localhost podman[236852]: time="2026-02-01T10:00:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:00:00 localhost podman[236852]: @ - - [01/Feb/2026:10:00:00 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:00:00 localhost podman[236852]: @ - - [01/Feb/2026:10:00:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18305 "" "Go-http-client/1.1" Feb 1 05:00:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 171 op/s Feb 1 05:00:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fc8c8d47-ce44-484d-a6aa-20ee79341f8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fc8c8d47-ce44-484d-a6aa-20ee79341f8c/.meta.tmp' Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fc8c8d47-ce44-484d-a6aa-20ee79341f8c/.meta.tmp' to config b'/volumes/_nogroup/fc8c8d47-ce44-484d-a6aa-20ee79341f8c/.meta' Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fc8c8d47-ce44-484d-a6aa-20ee79341f8c", "format": "json"}]: dispatch Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e230 e230: 6 total, 6 up, 6 in Feb 1 05:00:00 localhost ceph-mon[298604]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Feb 1 05:00:01 localhost nova_compute[274317]: 2026-02-01 10:00:01.095 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:01 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:01.448 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, 
table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:00:01 localhost nova_compute[274317]: 2026-02-01 10:00:01.578 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:01 localhost openstack_network_exporter[239388]: ERROR 10:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:00:01 localhost openstack_network_exporter[239388]: Feb 1 05:00:01 localhost openstack_network_exporter[239388]: ERROR 10:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:00:01 localhost openstack_network_exporter[239388]: Feb 1 05:00:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:01 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2444475700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:01 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2444475700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "format": "json"}]: dispatch Feb 1 05:00:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:01 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:01.788+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6010311e-11a6-4c95-b3ff-674156fa7f2b' of type subvolume Feb 1 05:00:01 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6010311e-11a6-4c95-b3ff-674156fa7f2b' of type subvolume Feb 1 05:00:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6010311e-11a6-4c95-b3ff-674156fa7f2b", "force": true, "format": "json"}]: dispatch Feb 1 05:00:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 05:00:01 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6010311e-11a6-4c95-b3ff-674156fa7f2b'' moved to trashcan Feb 1 05:00:01 localhost ceph-mgr[278126]: 
[volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6010311e-11a6-4c95-b3ff-674156fa7f2b, vol_name:cephfs) < "" Feb 1 05:00:02 localhost nova_compute[274317]: 2026-02-01 10:00:02.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:02 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1007765160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:02 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1007765160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 186 op/s Feb 1 05:00:02 localhost nova_compute[274317]: 2026-02-01 10:00:02.690 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:04 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a75eaef7-5948-4b0c-93f4-48367ba74a09", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 193 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 3.3 MiB/s wr, 158 op/s Feb 1 05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a75eaef7-5948-4b0c-93f4-48367ba74a09/.meta.tmp' Feb 1 05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a75eaef7-5948-4b0c-93f4-48367ba74a09/.meta.tmp' to config b'/volumes/_nogroup/a75eaef7-5948-4b0c-93f4-48367ba74a09/.meta' Feb 1 05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:04 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a75eaef7-5948-4b0c-93f4-48367ba74a09", "format": "json"}]: dispatch Feb 1 
05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 193 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 128 op/s Feb 1 05:00:06 localhost nova_compute[274317]: 2026-02-01 10:00:06.582 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:00:06 localhost podman[315856]: 2026-02-01 10:00:06.827542693 +0000 UTC m=+0.084451263 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:00:06 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e231 e231: 6 total, 6 up, 6 in Feb 1 05:00:06 localhost podman[315856]: 2026-02-01 10:00:06.867853919 +0000 UTC m=+0.124762449 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:00:06 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 05:00:07 localhost ovn_controller[152787]: 2026-02-01T10:00:07Z|00248|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Feb 1 05:00:07 localhost nova_compute[274317]: 2026-02-01 10:00:07.694 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a75eaef7-5948-4b0c-93f4-48367ba74a09", "format": "json"}]: dispatch Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:08 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a75eaef7-5948-4b0c-93f4-48367ba74a09' of type subvolume Feb 1 05:00:08 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:08.037+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a75eaef7-5948-4b0c-93f4-48367ba74a09' of type subvolume Feb 1 05:00:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a75eaef7-5948-4b0c-93f4-48367ba74a09", "force": true, "format": "json"}]: dispatch Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a75eaef7-5948-4b0c-93f4-48367ba74a09'' moved to trashcan Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a75eaef7-5948-4b0c-93f4-48367ba74a09, vol_name:cephfs) < "" Feb 1 05:00:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 193 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 147 op/s Feb 1 05:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 05:00:09 localhost podman[315875]: 2026-02-01 10:00:09.867795745 +0000 UTC m=+0.083980959 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 05:00:09 localhost podman[315875]: 2026-02-01 10:00:09.884693041 +0000 UTC m=+0.100878255 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:00:09 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:00:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 193 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 125 op/s Feb 1 05:00:10 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cf68d29d-a061-4145-ba2b-6bee3a2be2df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf68d29d-a061-4145-ba2b-6bee3a2be2df/.meta.tmp' Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf68d29d-a061-4145-ba2b-6bee3a2be2df/.meta.tmp' to config b'/volumes/_nogroup/cf68d29d-a061-4145-ba2b-6bee3a2be2df/.meta' Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:10 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cf68d29d-a061-4145-ba2b-6bee3a2be2df", "format": "json"}]: dispatch Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fc8c8d47-ce44-484d-a6aa-20ee79341f8c", "format": "json"}]: dispatch Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:11 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:11.319+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fc8c8d47-ce44-484d-a6aa-20ee79341f8c' of type subvolume Feb 1 05:00:11 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume 'fc8c8d47-ce44-484d-a6aa-20ee79341f8c' of type subvolume Feb 1 05:00:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fc8c8d47-ce44-484d-a6aa-20ee79341f8c", "force": true, "format": "json"}]: dispatch Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fc8c8d47-ce44-484d-a6aa-20ee79341f8c'' moved to trashcan Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fc8c8d47-ce44-484d-a6aa-20ee79341f8c, vol_name:cephfs) < "" Feb 1 05:00:11 localhost nova_compute[274317]: 2026-02-01 10:00:11.585 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:12 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2967673012' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:12 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2967673012' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 193 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 117 op/s Feb 1 05:00:12 localhost nova_compute[274317]: 2026-02-01 10:00:12.730 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:12 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3236207941' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:12 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3236207941' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:13 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6ef0dca5-087f-47f5-b456-3a93c05421f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:13 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 193 MiB data, 1010 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 158 op/s Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6ef0dca5-087f-47f5-b456-3a93c05421f7/.meta.tmp' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6ef0dca5-087f-47f5-b456-3a93c05421f7/.meta.tmp' to config b'/volumes/_nogroup/6ef0dca5-087f-47f5-b456-3a93c05421f7/.meta' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6ef0dca5-087f-47f5-b456-3a93c05421f7", "format": "json"}]: dispatch Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' to config 
b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "format": "json"}]: dispatch Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f3137f1-669c-444c-94c7-6fef11988c8f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f3137f1-669c-444c-94c7-6fef11988c8f/.meta.tmp' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f3137f1-669c-444c-94c7-6fef11988c8f/.meta.tmp' to config b'/volumes/_nogroup/7f3137f1-669c-444c-94c7-6fef11988c8f/.meta' Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f3137f1-669c-444c-94c7-6fef11988c8f", "format": "json"}]: dispatch Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e232 e232: 6 total, 6 up, 6 in Feb 1 05:00:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 193 MiB data, 1010 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 166 op/s Feb 1 05:00:16 localhost nova_compute[274317]: 2026-02-01 10:00:16.629 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:17 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/701977625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:17 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/701977625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "snap_name": "066e8414-0f7f-4d94-8ce3-b9f1dd2fb571", "format": "json"}]: dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6ef0dca5-087f-47f5-b456-3a93c05421f7", "format": "json"}]: dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:17.612+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6ef0dca5-087f-47f5-b456-3a93c05421f7' of type subvolume Feb 1 05:00:17 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6ef0dca5-087f-47f5-b456-3a93c05421f7' of type subvolume Feb 1 05:00:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6ef0dca5-087f-47f5-b456-3a93c05421f7", "force": true, "format": "json"}]: dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6ef0dca5-087f-47f5-b456-3a93c05421f7'' moved to 
trashcan Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6ef0dca5-087f-47f5-b456-3a93c05421f7, vol_name:cephfs) < "" Feb 1 05:00:17 localhost nova_compute[274317]: 2026-02-01 10:00:17.781 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f3137f1-669c-444c-94c7-6fef11988c8f", "format": "json"}]: dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7f3137f1-669c-444c-94c7-6fef11988c8f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7f3137f1-669c-444c-94c7-6fef11988c8f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:17.915+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f3137f1-669c-444c-94c7-6fef11988c8f' of type subvolume Feb 1 05:00:17 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f3137f1-669c-444c-94c7-6fef11988c8f' of type subvolume Feb 1 05:00:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f3137f1-669c-444c-94c7-6fef11988c8f", "force": true, "format": "json"}]: dispatch Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7f3137f1-669c-444c-94c7-6fef11988c8f'' moved to trashcan Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f3137f1-669c-444c-94c7-6fef11988c8f, vol_name:cephfs) < "" Feb 1 05:00:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 193 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 2.2 MiB/s wr, 188 op/s Feb 1 05:00:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e233 e233: 6 total, 6 up, 6 in Feb 1 05:00:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:20 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1482730681' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:20 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1482730681' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e234 e234: 6 total, 6 up, 6 in Feb 1 05:00:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 193 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 33 KiB/s wr, 74 op/s Feb 1 05:00:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7671ca1-51ec-4e8e-b389-8a8c50c13461", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7671ca1-51ec-4e8e-b389-8a8c50c13461/.meta.tmp' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7671ca1-51ec-4e8e-b389-8a8c50c13461/.meta.tmp' to config b'/volumes/_nogroup/f7671ca1-51ec-4e8e-b389-8a8c50c13461/.meta' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7671ca1-51ec-4e8e-b389-8a8c50c13461", "format": "json"}]: dispatch Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "snap_name": "066e8414-0f7f-4d94-8ce3-b9f1dd2fb571_3455eb51-47e6-416a-8a60-b0a5234ab5f1", "force": true, "format": "json"}]: dispatch Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot 
rm, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571_3455eb51-47e6-416a-8a60-b0a5234ab5f1, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' to config b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571_3455eb51-47e6-416a-8a60-b0a5234ab5f1, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "snap_name": "066e8414-0f7f-4d94-8ce3-b9f1dd2fb571", "force": true, "format": "json"}]: dispatch Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta.tmp' to config b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db/.meta' Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:066e8414-0f7f-4d94-8ce3-b9f1dd2fb571, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:00:21 Feb 1 05:00:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:00:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:00:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'manila_data', 'volumes', 'vms', 'manila_metadata'] Feb 1 05:00:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 05:00:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3695598619' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:21 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3695598619' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:21 localhost nova_compute[274317]: 2026-02-01 10:00:21.676 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014871085752838264 of space, bias 1.0, pg target 0.29692601219833736 quantized to 32 (current 32) Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Feb 1 05:00:21 
localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:00:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00011832164293690675 of space, bias 4.0, pg target 0.09418402777777778 quantized to 16 (current 16) Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:00:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:00:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 193 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 32 KiB/s wr, 70 op/s Feb 1 05:00:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:22 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/233384275' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:22 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/233384275' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:22 localhost nova_compute[274317]: 2026-02-01 10:00:22.821 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4276124241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4276124241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1249637887' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1249637887' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 194 MiB data, 1019 MiB used, 41 GiB / 42 GiB avail; 124 KiB/s rd, 59 KiB/s wr, 177 op/s Feb 1 05:00:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "format": "json"}]: dispatch Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:24 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53f5c0b7-79c3-4730-936f-6925a39bf1db' of type subvolume Feb 1 05:00:24 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:24.374+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53f5c0b7-79c3-4730-936f-6925a39bf1db' of type subvolume Feb 1 05:00:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53f5c0b7-79c3-4730-936f-6925a39bf1db", "force": true, "format": "json"}]: dispatch Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53f5c0b7-79c3-4730-936f-6925a39bf1db'' moved to trashcan Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53f5c0b7-79c3-4730-936f-6925a39bf1db, vol_name:cephfs) < "" Feb 1 05:00:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:25 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2046523029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:25 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2046523029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7671ca1-51ec-4e8e-b389-8a8c50c13461", "format": "json"}]: dispatch Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:25 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7671ca1-51ec-4e8e-b389-8a8c50c13461' of type subvolume Feb 1 05:00:25 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:25.104+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7671ca1-51ec-4e8e-b389-8a8c50c13461' of type subvolume Feb 1 05:00:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7671ca1-51ec-4e8e-b389-8a8c50c13461", "force": true, "format": "json"}]: dispatch Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7671ca1-51ec-4e8e-b389-8a8c50c13461'' moved to trashcan Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7671ca1-51ec-4e8e-b389-8a8c50c13461, vol_name:cephfs) < "" Feb 1 05:00:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 194 MiB data, 1019 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 34 KiB/s wr, 121 op/s Feb 1 05:00:26 localhost nova_compute[274317]: 2026-02-01 10:00:26.715 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 05:00:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e235 e235: 6 total, 6 up, 6 in Feb 1 05:00:26 localhost podman[315898]: 2026-02-01 10:00:26.879414352 +0000 UTC m=+0.092952618 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter) Feb 1 05:00:26 localhost systemd[1]: tmp-crun.fIIOXS.mount: Deactivated successfully. 
Feb 1 05:00:26 localhost podman[315906]: 2026-02-01 10:00:26.914999472 +0000 UTC m=+0.117162693 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:00:26 localhost podman[315898]: 2026-02-01 10:00:26.922545167 +0000 UTC m=+0.136083352 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, version=9.7, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, release=1769056855, maintainer=Red Hat, Inc.) Feb 1 05:00:26 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 05:00:26 localhost podman[315900]: 2026-02-01 10:00:26.979174532 +0000 UTC m=+0.185143001 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 1 05:00:27 localhost podman[315900]: 2026-02-01 10:00:27.02146142 +0000 UTC m=+0.227429909 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 05:00:27 localhost podman[315899]: 2026-02-01 10:00:27.030239613 +0000 UTC m=+0.238831084 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Feb 1 05:00:27 localhost podman[315899]: 2026-02-01 10:00:27.034916049 +0000 UTC m=+0.243507540 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Feb 1 05:00:27 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:00:27 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:00:27 localhost podman[315906]: 2026-02-01 10:00:27.053519989 +0000 UTC m=+0.255683270 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:00:27 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:00:27 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta' Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:27 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "format": "json"}]: dispatch Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:27 localhost nova_compute[274317]: 2026-02-01 10:00:27.864 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:27 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e236 e236: 6 total, 6 up, 6 in Feb 1 05:00:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 194 MiB data, 1024 MiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 58 KiB/s wr, 190 op/s Feb 1 05:00:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c7efa303-bbfc-4c39-877e-b2f35a82b5a0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c7efa303-bbfc-4c39-877e-b2f35a82b5a0/.meta.tmp' Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c7efa303-bbfc-4c39-877e-b2f35a82b5a0/.meta.tmp' to config 
b'/volumes/_nogroup/c7efa303-bbfc-4c39-877e-b2f35a82b5a0/.meta' Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c7efa303-bbfc-4c39-877e-b2f35a82b5a0", "format": "json"}]: dispatch Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:30 localhost podman[236852]: time="2026-02-01T10:00:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:00:30 localhost podman[236852]: @ - - [01/Feb/2026:10:00:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:00:30 localhost podman[236852]: @ - - [01/Feb/2026:10:00:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18296 "" "Go-http-client/1.1" Feb 1 05:00:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:00:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 194 MiB data, 1024 MiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 58 KiB/s wr, 190 op/s Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:00:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", 
"format": "json"}]: dispatch Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:00:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "snap_name": "d633f359-53f0-46c3-94ff-90f64f1e4469", "format": "json"}]: dispatch Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:31 localhost openstack_network_exporter[239388]: ERROR 10:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:00:31 localhost openstack_network_exporter[239388]: Feb 1 05:00:31 localhost openstack_network_exporter[239388]: ERROR 10:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:00:31 localhost openstack_network_exporter[239388]: Feb 1 05:00:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/.meta.tmp' Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/.meta.tmp' to config b'/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/.meta' Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "format": "json"}]: dispatch Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:00:31 localhost nova_compute[274317]: 2026-02-01 10:00:31.717 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c7efa303-bbfc-4c39-877e-b2f35a82b5a0", "format": "json"}]: dispatch Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7efa303-bbfc-4c39-877e-b2f35a82b5a0' of type subvolume Feb 1 05:00:31 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:31.836+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7efa303-bbfc-4c39-877e-b2f35a82b5a0' of type subvolume Feb 1 05:00:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c7efa303-bbfc-4c39-877e-b2f35a82b5a0", "force": true, "format": "json"}]: dispatch Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c7efa303-bbfc-4c39-877e-b2f35a82b5a0'' moved to trashcan Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7efa303-bbfc-4c39-877e-b2f35a82b5a0, vol_name:cephfs) < "" Feb 1 05:00:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 194 MiB data, 1024 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 24 KiB/s wr, 68 op/s Feb 1 05:00:32 localhost nova_compute[274317]: 2026-02-01 10:00:32.901 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e237 e237: 6 total, 6 up, 6 in Feb 1 05:00:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ba7af79-0f85-4db2-8701-a9110d019002", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: 
dispatch Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ba7af79-0f85-4db2-8701-a9110d019002/.meta.tmp' Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ba7af79-0f85-4db2-8701-a9110d019002/.meta.tmp' to config b'/volumes/_nogroup/4ba7af79-0f85-4db2-8701-a9110d019002/.meta' Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ba7af79-0f85-4db2-8701-a9110d019002", "format": "json"}]: dispatch Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 63 KiB/s wr, 124 op/s Feb 1 05:00:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "snap_name": "d633f359-53f0-46c3-94ff-90f64f1e4469_14a1ff06-8b18-4cb9-86da-d23110370acd", "force": true, "format": "json"}]: dispatch Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469_14a1ff06-8b18-4cb9-86da-d23110370acd, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469_14a1ff06-8b18-4cb9-86da-d23110370acd, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot 
rm", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "snap_name": "d633f359-53f0-46c3-94ff-90f64f1e4469", "force": true, "format": "json"}]: dispatch Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta.tmp' to config b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193/.meta' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d633f359-53f0-46c3-94ff-90f64f1e4469, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/.meta.tmp' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/.meta.tmp' to config b'/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/.meta' Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "format": "json"}]: dispatch Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67f16e44-b42c-4fb2-9bca-83ca589fff39", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: 
dispatch Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/67f16e44-b42c-4fb2-9bca-83ca589fff39/.meta.tmp' Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/67f16e44-b42c-4fb2-9bca-83ca589fff39/.meta.tmp' to config b'/volumes/_nogroup/67f16e44-b42c-4fb2-9bca-83ca589fff39/.meta' Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67f16e44-b42c-4fb2-9bca-83ca589fff39", "format": "json"}]: dispatch Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 34 KiB/s wr, 45 op/s Feb 1 05:00:36 localhost nova_compute[274317]: 2026-02-01 10:00:36.744 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4ba7af79-0f85-4db2-8701-a9110d019002", "new_size": 2147483648, "format": "json"}]: dispatch Feb 1 05:00:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e238 e238: 6 total, 6 up, 6 in Feb 1 05:00:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:36 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2729393937' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:36 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2729393937' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "format": "json"}]: dispatch Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:37 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d49ea05-c26a-4630-aaf8-e7e0481c4193' of type subvolume Feb 1 05:00:37 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:37.692+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d49ea05-c26a-4630-aaf8-e7e0481c4193' of type subvolume Feb 1 05:00:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d49ea05-c26a-4630-aaf8-e7e0481c4193", "force": true, "format": "json"}]: dispatch Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7d49ea05-c26a-4630-aaf8-e7e0481c4193'' moved to trashcan Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d49ea05-c26a-4630-aaf8-e7e0481c4193, vol_name:cephfs) < "" Feb 1 05:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:00:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e239 e239: 6 total, 6 up, 6 in Feb 1 05:00:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:37 localhost podman[315983]: 2026-02-01 10:00:37.878516937 +0000 UTC m=+0.089661385 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:37 localhost podman[315983]: 2026-02-01 10:00:37.914650813 +0000 UTC m=+0.125795191 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 
'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 05:00:37 localhost nova_compute[274317]: 2026-02-01 10:00:37.949 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:37 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' to config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta' Feb 1 05:00:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "format": "json"}]: dispatch Feb 1 05:00:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 120 KiB/s rd, 96 KiB/s wr, 176 op/s Feb 1 05:00:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "auth_id": "Joe", "tenant_id": "d32ed6e558674454a1648ebe57d1a805", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:00:38 localhost ceph-mgr[278126]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:00:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 1 05:00:38 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:38 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID Joe with tenant d32ed6e558674454a1648ebe57d1a805 Feb 1 05:00:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e", "osd", "allow rw pool=manila_data namespace=fsvolumens_ef5904d0-6de5-446a-a091-edb3ad7abb31", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:00:38 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e", "osd", "allow rw pool=manila_data namespace=fsvolumens_ef5904d0-6de5-446a-a091-edb3ad7abb31", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:00:38 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:38 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e", "osd", "allow rw pool=manila_data namespace=fsvolumens_ef5904d0-6de5-446a-a091-edb3ad7abb31", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:38 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e", "osd", "allow rw pool=manila_data namespace=fsvolumens_ef5904d0-6de5-446a-a091-edb3ad7abb31", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:38 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e", "osd", "allow rw pool=manila_data namespace=fsvolumens_ef5904d0-6de5-446a-a091-edb3ad7abb31", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:00:39 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon).osd e240 e240: 6 total, 6 up, 6 in Feb 1 05:00:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67f16e44-b42c-4fb2-9bca-83ca589fff39", "format": "json"}]: dispatch Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:39 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:39.667+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67f16e44-b42c-4fb2-9bca-83ca589fff39' of type subvolume Feb 1 05:00:39 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67f16e44-b42c-4fb2-9bca-83ca589fff39' of type subvolume Feb 1 05:00:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67f16e44-b42c-4fb2-9bca-83ca589fff39", "force": true, "format": "json"}]: dispatch Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/67f16e44-b42c-4fb2-9bca-83ca589fff39'' moved to trashcan Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67f16e44-b42c-4fb2-9bca-83ca589fff39, vol_name:cephfs) < "" Feb 1 05:00:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ba7af79-0f85-4db2-8701-a9110d019002", "format": "json"}]: dispatch Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ba7af79-0f85-4db2-8701-a9110d019002, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ba7af79-0f85-4db2-8701-a9110d019002, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:40 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:40.155+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ba7af79-0f85-4db2-8701-a9110d019002' of type subvolume Feb 1 05:00:40 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ba7af79-0f85-4db2-8701-a9110d019002' of type subvolume Feb 1 05:00:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : 
from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ba7af79-0f85-4db2-8701-a9110d019002", "force": true, "format": "json"}]: dispatch Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ba7af79-0f85-4db2-8701-a9110d019002'' moved to trashcan Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ba7af79-0f85-4db2-8701-a9110d019002, vol_name:cephfs) < "" Feb 1 05:00:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 48 KiB/s wr, 111 op/s Feb 1 05:00:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:00:40 localhost podman[316002]: 2026-02-01 10:00:40.874058173 +0000 UTC m=+0.085133564 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 05:00:40 localhost podman[316002]: 2026-02-01 10:00:40.887656507 +0000 UTC m=+0.098731898 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, 
container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:00:40 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 05:00:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "snap_name": "814fa479-ec06-41a9-8fda-1c2cd3e9cb9c", "format": "json"}]: dispatch Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/.meta.tmp' Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/.meta.tmp' to config b'/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/.meta' Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "format": "json"}]: dispatch Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:41.776 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:00:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:41.777 158655 
DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:00:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:41.777 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:00:41 localhost nova_compute[274317]: 2026-02-01 10:00:41.788 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 48 KiB/s wr, 111 op/s Feb 1 05:00:42 localhost nova_compute[274317]: 2026-02-01 10:00:42.994 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bcbe03f2-a495-4378-a713-02be779827da", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcbe03f2-a495-4378-a713-02be779827da/.meta.tmp' Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcbe03f2-a495-4378-a713-02be779827da/.meta.tmp' to config b'/volumes/_nogroup/bcbe03f2-a495-4378-a713-02be779827da/.meta' Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bcbe03f2-a495-4378-a713-02be779827da", "format": "json"}]: dispatch Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7740003d-21db-4610-b8f3-9babed626268", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": 
"json"}]: dispatch Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7740003d-21db-4610-b8f3-9babed626268, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:00:44 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:00:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:00:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:00:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7740003d-21db-4610-b8f3-9babed626268/.meta.tmp' Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7740003d-21db-4610-b8f3-9babed626268/.meta.tmp' to config b'/volumes/_nogroup/7740003d-21db-4610-b8f3-9babed626268/.meta' Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7740003d-21db-4610-b8f3-9babed626268, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7740003d-21db-4610-b8f3-9babed626268", "format": "json"}]: dispatch Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7740003d-21db-4610-b8f3-9babed626268, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev e70c5903-4f05-4c3b-aada-7b3246cabb7b (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:00:44 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev e70c5903-4f05-4c3b-aada-7b3246cabb7b (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:00:44 localhost ceph-mgr[278126]: [progress INFO root] Completed event e70c5903-4f05-4c3b-aada-7b3246cabb7b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:00:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:00:44 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7740003d-21db-4610-b8f3-9babed626268, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mgr[278126]: 
log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 94 KiB/s wr, 165 op/s Feb 1 05:00:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "Joe", "tenant_id": "f62ab07d2055417db4484bccb101ac2e", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 1 05:00:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Feb 1 05:00:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:00:44 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:44.833+0000 7f93ec23e640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Feb 1 05:00:44 localhost ceph-mgr[278126]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Feb 1 05:00:45 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:00:45 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:00:45 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "snap_name": "814fa479-ec06-41a9-8fda-1c2cd3e9cb9c_88bbe56d-e093-47bb-a7e1-b7e3d657fbd1", "force": true, "format": "json"}]: dispatch Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c_88bbe56d-e093-47bb-a7e1-b7e3d657fbd1, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' to 
config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta' Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c_88bbe56d-e093-47bb-a7e1-b7e3d657fbd1, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "snap_name": "814fa479-ec06-41a9-8fda-1c2cd3e9cb9c", "force": true, "format": "json"}]: dispatch Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta.tmp' to config b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824/.meta' Feb 1 05:00:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:814fa479-ec06-41a9-8fda-1c2cd3e9cb9c, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 05:00:45 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 16K writes, 64K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5709 syncs, 2.98 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 39K keys, 11K commit groups, 1.0 writes per commit group, ingest: 26.10 MB, 0.04 MB/s#012Interval WAL: 11K writes, 4797 syncs, 2.34 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 05:00:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 67 op/s Feb 1 05:00:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bcbe03f2-a495-4378-a713-02be779827da", "new_size": 2147483648, "format": "json"}]: dispatch Feb 1 05:00:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:46 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:00:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:00:46 localhost nova_compute[274317]: 2026-02-01 10:00:46.820 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e241 e241: 6 total, 6 up, 6 in Feb 1 05:00:47 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:00:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e242 e242: 6 total, 6 up, 6 in Feb 1 05:00:48 localhost nova_compute[274317]: 2026-02-01 10:00:48.018 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 241 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 127 op/s Feb 1 05:00:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "format": "json"}]: dispatch Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d3128e87-da55-4740-ae37-00f9b18ac824, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d3128e87-da55-4740-ae37-00f9b18ac824, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:48.570+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3128e87-da55-4740-ae37-00f9b18ac824' of type subvolume Feb 1 05:00:48 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3128e87-da55-4740-ae37-00f9b18ac824' of type subvolume Feb 1 05:00:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3128e87-da55-4740-ae37-00f9b18ac824", "force": true, "format": "json"}]: dispatch Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < "" Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d3128e87-da55-4740-ae37-00f9b18ac824'' moved to trashcan Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3128e87-da55-4740-ae37-00f9b18ac824, vol_name:cephfs) < 
"" Feb 1 05:00:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "tempest-cephx-id-2026360705", "tenant_id": "f62ab07d2055417db4484bccb101ac2e", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume authorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:00:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} v 0) Feb 1 05:00:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} : dispatch Feb 1 05:00:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2026360705 with tenant f62ab07d2055417db4484bccb101ac2e Feb 1 05:00:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026360705", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:00:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026360705", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume authorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:00:49 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:49.054 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:00:49 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:49.055 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:00:49 localhost nova_compute[274317]: 2026-02-01 10:00:49.089 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} : dispatch Feb 1 05:00:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026360705", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026360705", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2026360705", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:00:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7740003d-21db-4610-b8f3-9babed626268", "format": "json"}]: dispatch Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7740003d-21db-4610-b8f3-9babed626268, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7740003d-21db-4610-b8f3-9babed626268, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:49 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:49.482+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7740003d-21db-4610-b8f3-9babed626268' of type subvolume Feb 1 05:00:49 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7740003d-21db-4610-b8f3-9babed626268' of type subvolume Feb 1 05:00:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7740003d-21db-4610-b8f3-9babed626268", "force": true, "format": "json"}]: dispatch Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7740003d-21db-4610-b8f3-9babed626268, 
vol_name:cephfs) < "" Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7740003d-21db-4610-b8f3-9babed626268'' moved to trashcan Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7740003d-21db-4610-b8f3-9babed626268, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bcbe03f2-a495-4378-a713-02be779827da", "format": "json"}]: dispatch Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bcbe03f2-a495-4378-a713-02be779827da, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bcbe03f2-a495-4378-a713-02be779827da, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:50.129+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcbe03f2-a495-4378-a713-02be779827da' of type subvolume Feb 1 05:00:50 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcbe03f2-a495-4378-a713-02be779827da' of type subvolume Feb 1 05:00:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bcbe03f2-a495-4378-a713-02be779827da", "force": true, "format": "json"}]: dispatch Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bcbe03f2-a495-4378-a713-02be779827da'' moved to trashcan Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcbe03f2-a495-4378-a713-02be779827da, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 1 05:00:50 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 16K writes, 62K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5389 syncs, 3.08 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per 
commit group, ingest: 30.50 MB, 0.05 MB/s#012Interval WAL: 11K writes, 4649 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 1 05:00:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 241 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 127 op/s Feb 1 05:00:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/.meta.tmp' Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/.meta.tmp' to config b'/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/.meta' Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "format": "json"}]: dispatch Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:51 localhost ovn_metadata_agent[158650]: 2026-02-01 10:00:51.058 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:00:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e243 e243: 6 total, 6 up, 6 in Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
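The audit trail above shows client.openstack (the Manila CephFS driver during a tempest run) driving the ceph-mgr volumes module with mgr commands: "fs subvolume create" with a 1 GiB size, namespace_isolated and mode 0755, followed by "fs subvolume getpath", "fs subvolume authorize"/"deauthorize"/"evict", snapshot create/rm, "fs subvolume resize", "fs clone status" (rejected with EOPNOTSUPP on plain subvolumes), and "fs subvolume rm" with force:true, while the mgr relays the matching "auth get" / "auth get-or-create" / "auth rm" commands to the mon. The sketch below dispatches the same create/getpath/rm payloads from Python via the rados bindings' mgr_command(); the conf path, client name and subvolume name are assumptions drawn from this log, and the code is an illustration, not the Manila driver's actual implementation.

# Minimal sketch, assuming a reachable cluster, /etc/ceph/ceph.conf and a
# client.openstack keyring, as used by the processes in this log.
import json
import rados

def mgr_cmd(cluster, payload):
    # mgr_command() takes a JSON-encoded command string and returns (ret, outbuf, outs)
    ret, out, errs = cluster.mgr_command(json.dumps(payload), b'')
    if ret != 0:
        raise RuntimeError(f"{payload['prefix']} failed ({ret}): {errs}")
    return out.decode() if out else ''

with rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack') as cluster:
    sub = 'example-subvolume'  # hypothetical name; the log uses per-share UUIDs
    # Same JSON payload shape as the "fs subvolume create" dispatch lines above.
    mgr_cmd(cluster, {"prefix": "fs subvolume create", "vol_name": "cephfs",
                      "sub_name": sub, "size": 1073741824,
                      "namespace_isolated": True, "mode": "0755", "format": "json"})
    path = mgr_cmd(cluster, {"prefix": "fs subvolume getpath",
                             "vol_name": "cephfs", "sub_name": sub, "format": "json"})
    print("subvolume path:", path.strip())
    # Removal moves the subvolume to the trashcan, as the mgr log lines show.
    mgr_cmd(cluster, {"prefix": "fs subvolume rm", "vol_name": "cephfs",
                      "sub_name": sub, "force": True, "format": "json"})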
Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:00:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "Joe", "format": "json"}]: dispatch Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'ebeb9c1e-187e-4fbb-8711-dc250e4ab635' Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "Joe", "format": "json"}]: dispatch Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464 Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:00:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:51 localhost nova_compute[274317]: 2026-02-01 10:00:51.853 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 241 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 3.6 MiB/s wr, 74 op/s Feb 1 05:00:53 localhost nova_compute[274317]: 2026-02-01 10:00:53.052 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:53 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1be67b6e-cf94-4bea-af19-93677534e470", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/1be67b6e-cf94-4bea-af19-93677534e470/.meta.tmp' Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1be67b6e-cf94-4bea-af19-93677534e470/.meta.tmp' to config b'/volumes/_nogroup/1be67b6e-cf94-4bea-af19-93677534e470/.meta' Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:53 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1be67b6e-cf94-4bea-af19-93677534e470", "format": "json"}]: dispatch Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e244 e244: 6 total, 6 up, 6 in Feb 1 05:00:53 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "auth_id": "tempest-cephx-id-397577304", "tenant_id": "ff1159417622494a84300007e5ed57fa", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume authorize, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, tenant_id:ff1159417622494a84300007e5ed57fa, vol_name:cephfs) < "" Feb 1 05:00:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} v 0) Feb 1 05:00:53 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} : dispatch Feb 1 05:00:53 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-397577304 with tenant ff1159417622494a84300007e5ed57fa Feb 1 05:00:53 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-397577304", "caps": ["mds", "allow rw path=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12", "osd", "allow rw pool=manila_data namespace=fsvolumens_ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:00:53 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-397577304", "caps": ["mds", "allow rw path=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume authorize, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, tenant_id:ff1159417622494a84300007e5ed57fa, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cf68d29d-a061-4145-ba2b-6bee3a2be2df", "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:54.069+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf68d29d-a061-4145-ba2b-6bee3a2be2df' of type subvolume Feb 1 05:00:54 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf68d29d-a061-4145-ba2b-6bee3a2be2df' of type subvolume Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cf68d29d-a061-4145-ba2b-6bee3a2be2df", "force": true, "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cf68d29d-a061-4145-ba2b-6bee3a2be2df'' moved to trashcan Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf68d29d-a061-4145-ba2b-6bee3a2be2df, vol_name:cephfs) < "" Feb 1 05:00:54 localhost nova_compute[274317]: 2026-02-01 10:00:54.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 64 KiB/s wr, 82 op/s Feb 1 05:00:54 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} : dispatch Feb 1 05:00:54 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", 
"entity": "client.tempest-cephx-id-397577304", "caps": ["mds", "allow rw path=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12", "osd", "allow rw pool=manila_data namespace=fsvolumens_ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:54 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-397577304", "caps": ["mds", "allow rw path=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12", "osd", "allow rw pool=manila_data namespace=fsvolumens_ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:00:54 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-397577304", "caps": ["mds", "allow rw path=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12", "osd", "allow rw pool=manila_data namespace=fsvolumens_ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "auth_id": "tempest-cephx-id-397577304", "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume deauthorize, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} v 0) Feb 1 05:00:54 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} : dispatch Feb 1 05:00:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-397577304"} v 0) Feb 1 05:00:54 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-397577304"} : dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume deauthorize, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "auth_id": "tempest-cephx-id-397577304", "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume evict, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-397577304, client_metadata.root=/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0/e2b746e7-0779-4d7e-9231-873a16270c12 Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-397577304, format:json, prefix:fs subvolume evict, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:54.772+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea35db83-15a2-4b5c-b6d7-6a25b52b26b0' of type subvolume Feb 1 05:00:54 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea35db83-15a2-4b5c-b6d7-6a25b52b26b0' of type subvolume Feb 1 05:00:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea35db83-15a2-4b5c-b6d7-6a25b52b26b0", "force": true, "format": "json"}]: dispatch Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea35db83-15a2-4b5c-b6d7-6a25b52b26b0'' moved to trashcan Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea35db83-15a2-4b5c-b6d7-6a25b52b26b0, vol_name:cephfs) < "" Feb 1 05:00:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "tempest-cephx-id-2026360705", "format": "json"}]: dispatch Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume deauthorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:55 localhost nova_compute[274317]: 2026-02-01 10:00:55.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - 
-] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} v 0) Feb 1 05:00:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026360705"} v 0) Feb 1 05:00:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026360705"} : dispatch Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume deauthorize, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "auth_id": "tempest-cephx-id-2026360705", "format": "json"}]: dispatch Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume evict, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2026360705, client_metadata.root=/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635/8b5ea12c-b039-40af-b27d-ed5446708464 Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:00:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2026360705, format:json, prefix:fs subvolume evict, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-397577304"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-397577304", "format": "json"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-397577304"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-397577304"}]': finished Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' 
cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026360705"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2026360705", "format": "json"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026360705"} : dispatch Feb 1 05:00:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2026360705"}]': finished Feb 1 05:00:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e245 e245: 6 total, 6 up, 6 in Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.127 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.128 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.154 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.154 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.155 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.155 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.155 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:00:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 69 KiB/s wr, 89 op/s Feb 1 05:00:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e246 e246: 6 total, 6 up, 6 in Feb 1 05:00:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:00:56 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3082454826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.623 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:00:56 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1be67b6e-cf94-4bea-af19-93677534e470", "format": "json"}]: dispatch Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1be67b6e-cf94-4bea-af19-93677534e470, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1be67b6e-cf94-4bea-af19-93677534e470, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:00:56 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1be67b6e-cf94-4bea-af19-93677534e470' of type subvolume Feb 1 05:00:56 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:00:56.645+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1be67b6e-cf94-4bea-af19-93677534e470' of type subvolume Feb 1 05:00:56 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1be67b6e-cf94-4bea-af19-93677534e470", "force": true, "format": "json"}]: dispatch Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1be67b6e-cf94-4bea-af19-93677534e470'' moved to trashcan Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:00:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1be67b6e-cf94-4bea-af19-93677534e470, vol_name:cephfs) < "" Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.835 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.837 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11539MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.838 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.838 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:00:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e247 e247: 6 total, 6 up, 6 in Feb 1 05:00:56 localhost nova_compute[274317]: 2026-02-01 10:00:56.886 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.136 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.137 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.316 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:00:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:00:57 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/767252392' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:00:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:00:57 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/767252392' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:00:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:00:57 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1927343437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
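The Acquiring lock "compute_resources" / acquired / released DEBUG triplets interleaved with the resource-tracker entries above come from oslo.concurrency's lockutils wrapper (the "inner" function it logs from at lockutils.py:404/409/423). As a rough illustration only, and not nova's actual code, the same pattern is produced by guarding a function with lockutils.synchronized; the function name below is invented:

import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)  # lockutils logs acquire/release at DEBUG


@lockutils.synchronized('compute_resources')  # same process-local lock name as in the log
def audit_compute_resources():
    # Runs with the named semaphore held; on entry and exit lockutils emits the
    # "Acquiring lock ..." / "Lock ... acquired ... waited" / "... released ... held"
    # messages of the kind seen above.
    pass


audit_compute_resources()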
Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.790 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.798 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.821 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.824 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:00:57 localhost nova_compute[274317]: 2026-02-01 10:00:57.825 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:00:57 localhost podman[316157]: 2026-02-01 10:00:57.892760649 +0000 UTC m=+0.099653176 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 1 05:00:57 localhost podman[316155]: 2026-02-01 10:00:57.96562528 +0000 UTC m=+0.175198251 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, architecture=x86_64, name=ubi9/ubi-minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1769056855, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z) Feb 1 05:00:57 localhost podman[316158]: 2026-02-01 10:00:57.988524613 +0000 UTC m=+0.197744533 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:00:57 localhost podman[316158]: 2026-02-01 10:00:57.998219836 +0000 UTC m=+0.207439786 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 05:00:58 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
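Both "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" invocations logged above by nova_compute (and the matching "df" dispatches on the mon) are how the resource tracker sizes its DISK_GB inventory when instance disks live in Ceph. A minimal sketch of the same call, not nova's implementation; the JSON key names ("stats", "total_bytes", "total_avail_bytes") are as produced by recent Ceph releases and may vary:

import json
import subprocess

# Same command line as in the oslo_concurrency.processutils entries above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out).get("stats", {})
total_gib = stats.get("total_bytes", 0) / 1024 ** 3
avail_gib = stats.get("total_avail_bytes", 0) / 1024 ** 3
print("cluster capacity: %.1f GiB total, %.1f GiB available" % (total_gib, avail_gib))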
Feb 1 05:00:58 localhost podman[316155]: 2026-02-01 10:00:58.047767599 +0000 UTC m=+0.257340640 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.7, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=openstack_network_exporter) Feb 1 05:00:58 localhost nova_compute[274317]: 2026-02-01 10:00:58.053 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:00:58 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 05:00:58 localhost podman[316156]: 2026-02-01 10:00:58.090726229 +0000 UTC m=+0.299633039 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:00:58 localhost podman[316157]: 2026-02-01 10:00:58.099316897 +0000 UTC m=+0.306209424 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 05:00:58 localhost systemd[1]: 
c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:00:58 localhost podman[316156]: 2026-02-01 10:00:58.125853523 +0000 UTC m=+0.334760343 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:00:58 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
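Each of the four podman healthcheck cycles above follows the same shape: systemd starts a transient "<container-id>.service" that runs /usr/bin/podman healthcheck run <id>, podman records a health_status=healthy event and then exec_died for the probe process, and the transient unit reports "Deactivated successfully". A small sketch of the same probe run by hand (exit status 0 is assumed to mean healthy; podman's non-zero codes are not distinguished here, and the container names are taken from the entries above):

import subprocess

def probe(container):
    # Equivalent of the transient units above: run the container's configured
    # healthcheck ("/openstack/healthcheck" for these EDPM containers) once.
    rc = subprocess.call(["podman", "healthcheck", "run", container])
    return "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc

for name in ("ovn_controller", "ovn_metadata_agent", "node_exporter"):
    print(name, probe(name))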
Feb 1 05:00:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 140 KiB/s wr, 107 op/s Feb 1 05:00:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "auth_id": "Joe", "format": "json"}]: dispatch Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 1 05:00:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Feb 1 05:00:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "auth_id": "Joe", "format": "json"}]: dispatch Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31/a9d003d6-2a9d-4a04-919b-f3994828d27e Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:00:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:00:58 localhost nova_compute[274317]: 2026-02-01 10:00:58.797 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:58 localhost nova_compute[274317]: 2026-02-01 10:00:58.798 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:58 localhost nova_compute[274317]: 2026-02-01 
10:00:58.798 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:58 localhost nova_compute[274317]: 2026-02-01 10:00:58.799 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:00:59 localhost nova_compute[274317]: 2026-02-01 10:00:59.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:00:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 1 05:00:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 1 05:00:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Feb 1 05:00:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Feb 1 05:00:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e248 e248: 6 total, 6 up, 6 in Feb 1 05:00:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dad20548-fd8b-498e-8859-9201c657d5e6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:00:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:00:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dad20548-fd8b-498e-8859-9201c657d5e6/.meta.tmp' Feb 1 05:00:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dad20548-fd8b-498e-8859-9201c657d5e6/.meta.tmp' to config b'/volumes/_nogroup/dad20548-fd8b-498e-8859-9201c657d5e6/.meta' Feb 1 05:00:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dad20548-fd8b-498e-8859-9201c657d5e6", "format": "json"}]: dispatch Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:01:00 localhost podman[236852]: time="2026-02-01T10:01:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:01:00 localhost podman[236852]: @ - - [01/Feb/2026:10:01:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:01:00 localhost podman[236852]: @ - - [01/Feb/2026:10:01:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1" Feb 1 05:01:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "361f7103-8230-4c8b-80e5-41b9d7dd022d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/361f7103-8230-4c8b-80e5-41b9d7dd022d/.meta.tmp' Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/361f7103-8230-4c8b-80e5-41b9d7dd022d/.meta.tmp' to config b'/volumes/_nogroup/361f7103-8230-4c8b-80e5-41b9d7dd022d/.meta' Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "361f7103-8230-4c8b-80e5-41b9d7dd022d", "format": "json"}]: dispatch Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 143 KiB/s wr, 109 op/s Feb 1 05:01:01 localhost openstack_network_exporter[239388]: ERROR 10:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:01:01 localhost openstack_network_exporter[239388]: Feb 1 05:01:01 localhost openstack_network_exporter[239388]: ERROR 10:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:01:01 
localhost openstack_network_exporter[239388]: Feb 1 05:01:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e249 e249: 6 total, 6 up, 6 in Feb 1 05:01:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "auth_id": "admin", "tenant_id": "d32ed6e558674454a1648ebe57d1a805", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:01:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:01:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Feb 1 05:01:01 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 1 05:01:01 localhost ceph-mgr[278126]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 1 05:01:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:01:01 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:01.823+0000 7f93ec23e640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 1 05:01:01 localhost ceph-mgr[278126]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Feb 1 05:01:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e250 e250: 6 total, 6 up, 6 in Feb 1 05:01:01 localhost nova_compute[274317]: 2026-02-01 10:01:01.928 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 127 KiB/s wr, 97 op/s Feb 1 05:01:02 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 1 05:01:03 localhost nova_compute[274317]: 2026-02-01 10:01:03.069 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:03 localhost nova_compute[274317]: 2026-02-01 10:01:03.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:03 localhost nova_compute[274317]: 2026-02-01 10:01:03.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 05:01:03 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dad20548-fd8b-498e-8859-9201c657d5e6", "format": "json"}]: dispatch Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dad20548-fd8b-498e-8859-9201c657d5e6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dad20548-fd8b-498e-8859-9201c657d5e6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:03 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:03.225+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dad20548-fd8b-498e-8859-9201c657d5e6' of type subvolume Feb 1 05:01:03 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dad20548-fd8b-498e-8859-9201c657d5e6' of type subvolume Feb 1 05:01:03 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dad20548-fd8b-498e-8859-9201c657d5e6", "force": true, "format": "json"}]: dispatch Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dad20548-fd8b-498e-8859-9201c657d5e6'' moved to trashcan Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dad20548-fd8b-498e-8859-9201c657d5e6, vol_name:cephfs) < "" Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:01:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:01:04 localhost nova_compute[274317]: 2026-02-01 10:01:04.115 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 48 KiB/s wr, 94 op/s Feb 1 05:01:05 localhost nova_compute[274317]: 2026-02-01 10:01:05.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:05 localhost nova_compute[274317]: 2026-02-01 10:01:05.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 05:01:05 localhost nova_compute[274317]: 2026-02-01 10:01:05.118 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 05:01:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "361f7103-8230-4c8b-80e5-41b9d7dd022d", "format": "json"}]: dispatch Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:05.129+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '361f7103-8230-4c8b-80e5-41b9d7dd022d' of type subvolume Feb 1 05:01:05 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '361f7103-8230-4c8b-80e5-41b9d7dd022d' of type subvolume Feb 1 05:01:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "361f7103-8230-4c8b-80e5-41b9d7dd022d", "force": true, "format": "json"}]: dispatch Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/361f7103-8230-4c8b-80e5-41b9d7dd022d'' moved to trashcan Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:361f7103-8230-4c8b-80e5-41b9d7dd022d, vol_name:cephfs) < "" Feb 1 05:01:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "auth_id": "david", "tenant_id": "d32ed6e558674454a1648ebe57d1a805", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:01:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 1 05:01:05 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID david with tenant d32ed6e558674454a1648ebe57d1a805 Feb 1 05:01:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:01:05 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:01:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, tenant_id:d32ed6e558674454a1648ebe57d1a805, vol_name:cephfs) < "" Feb 1 05:01:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:01:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' 
entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:01:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:01:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 43 KiB/s wr, 83 op/s Feb 1 05:01:06 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "334d218b-dfca-41dc-92c9-1bb7ec15d360", "format": "json"}]: dispatch Feb 1 05:01:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:06 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8cb94272-9b85-4a3b-a318-d4ded9f25bee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:06 localhost nova_compute[274317]: 2026-02-01 10:01:06.970 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8cb94272-9b85-4a3b-a318-d4ded9f25bee/.meta.tmp' Feb 1 05:01:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8cb94272-9b85-4a3b-a318-d4ded9f25bee/.meta.tmp' to config b'/volumes/_nogroup/8cb94272-9b85-4a3b-a318-d4ded9f25bee/.meta' Feb 1 05:01:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8cb94272-9b85-4a3b-a318-d4ded9f25bee", "format": "json"}]: dispatch Feb 1 05:01:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:08 localhost nova_compute[274317]: 2026-02-01 10:01:08.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:08 localhost nova_compute[274317]: 2026-02-01 10:01:08.104 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 452 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 121 KiB/s rd, 32 MiB/s wr, 190 op/s Feb 1 05:01:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1ec6001-c4f0-42e8-a3ae-66c185a36061/.meta.tmp' Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1ec6001-c4f0-42e8-a3ae-66c185a36061/.meta.tmp' to config b'/volumes/_nogroup/c1ec6001-c4f0-42e8-a3ae-66c185a36061/.meta' Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "format": "json"}]: dispatch Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:01:08 localhost systemd[1]: tmp-crun.Z8Xvik.mount: Deactivated successfully. Feb 1 05:01:08 localhost podman[316248]: 2026-02-01 10:01:08.884204215 +0000 UTC m=+0.094255579 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 05:01:08 localhost podman[316248]: 2026-02-01 10:01:08.89561181 +0000 UTC m=+0.105663134 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 1 05:01:08 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:01:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e251 e251: 6 total, 6 up, 6 in Feb 1 05:01:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 452 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 114 KiB/s rd, 30 MiB/s wr, 179 op/s Feb 1 05:01:10 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "334d218b-dfca-41dc-92c9-1bb7ec15d360_74b9c1d7-8ee2-4566-b420-d2cfb354ff64", "force": true, "format": "json"}]: dispatch Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360_74b9c1d7-8ee2-4566-b420-d2cfb354ff64, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360_74b9c1d7-8ee2-4566-b420-d2cfb354ff64, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:10 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "334d218b-dfca-41dc-92c9-1bb7ec15d360", "force": true, "format": "json"}]: dispatch Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:10 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:334d218b-dfca-41dc-92c9-1bb7ec15d360, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8cb94272-9b85-4a3b-a318-d4ded9f25bee", "format": "json"}]: dispatch Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:11 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8cb94272-9b85-4a3b-a318-d4ded9f25bee' of type subvolume Feb 1 05:01:11 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:11.766+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8cb94272-9b85-4a3b-a318-d4ded9f25bee' of type subvolume Feb 1 05:01:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8cb94272-9b85-4a3b-a318-d4ded9f25bee", "force": true, "format": "json"}]: dispatch Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8cb94272-9b85-4a3b-a318-d4ded9f25bee'' moved to trashcan Feb 1 05:01:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8cb94272-9b85-4a3b-a318-d4ded9f25bee, vol_name:cephfs) < "" Feb 1 05:01:11 localhost systemd[1]: tmp-crun.9KAhie.mount: Deactivated successfully. 
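[Annotation] Each "fs clone status" call against a plain subvolume above fails with "(95) Operation not supported ... of type subvolume", which the caller evidently treats as "this is not a clone" before issuing "fs subvolume rm --force"; the subvolume is then moved to the trashcan and purged later by the volumes async job. A sketch of that check-then-delete pattern, assuming the ceph CLI spellings for the same command prefixes:

import subprocess

def remove_share_backend(vol, sub):
    # The log shows 'fs clone status' returning error 95 for ordinary subvolumes;
    # deletion proceeds regardless, so a non-zero status is not fatal here.
    status = subprocess.run(
        ["ceph", "fs", "clone", "status", vol, sub, "--format", "json"],
        capture_output=True, text=True,
    )
    if status.returncode != 0:
        # e.g. "(95) Operation not supported ... of type subvolume": a plain subvolume, not a clone.
        pass
    # Mirrors the logged removal; the mgr moves the path to the trashcan and
    # queues an async purge job ("queuing job for volume 'cephfs'").
    subprocess.run(["ceph", "fs", "subvolume", "rm", vol, sub, "--force"], check=True)

# remove_share_backend("cephfs", "8cb94272-9b85-4a3b-a318-d4ded9f25bee")  # name taken from the log above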
Feb 1 05:01:11 localhost podman[316267]: 2026-02-01 10:01:11.882352404 +0000 UTC m=+0.088727517 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:01:11 localhost podman[316267]: 2026-02-01 10:01:11.891719555 +0000 UTC m=+0.098094628 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:01:11 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
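[Annotation] The systemd/podman lines above are periodic container health checks: a transient unit runs "/usr/bin/podman healthcheck run <container-id>", podman records a health_status=healthy event followed by exec_died, and the unit deactivates. A minimal sketch of the same probe, assuming only the documented exit-code convention of "podman healthcheck run" (0 when healthy):

import subprocess

def container_is_healthy(container):
    # Equivalent of the transient units logged above, e.g.
    # /usr/bin/podman healthcheck run a1727fb04d49... (podman_exporter).
    result = subprocess.run(["podman", "healthcheck", "run", container],
                            capture_output=True, text=True)
    return result.returncode == 0

# container_is_healthy("podman_exporter")  # container name taken from the log above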
Feb 1 05:01:11 localhost nova_compute[274317]: 2026-02-01 10:01:11.971 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:12 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "auth_id": "david", "tenant_id": "f62ab07d2055417db4484bccb101ac2e", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:01:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:01:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 1 05:01:12 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:12 localhost ceph-mgr[278126]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Feb 1 05:01:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, tenant_id:f62ab07d2055417db4484bccb101ac2e, vol_name:cephfs) < "" Feb 1 05:01:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:12.022+0000 7f93ec23e640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Feb 1 05:01:12 localhost ceph-mgr[278126]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Feb 1 05:01:12 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 452 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 97 KiB/s rd, 26 MiB/s wr, 152 op/s Feb 1 05:01:13 localhost nova_compute[274317]: 2026-02-01 10:01:13.148 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 881 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 127 KiB/s rd, 68 MiB/s wr, 215 op/s Feb 1 05:01:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e252 e252: 6 total, 6 up, 6 in Feb 1 05:01:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "auth_id": "david", "format": "json"}]: dispatch Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'c1ec6001-c4f0-42e8-a3ae-66c185a36061' Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "auth_id": "david", "format": "json"}]: dispatch Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/c1ec6001-c4f0-42e8-a3ae-66c185a36061/f886650d-b51c-4718-ba32-df2300a26036 Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:01:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 881 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 87 KiB/s rd, 54 MiB/s wr, 150 op/s Feb 1 05:01:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "0463ec36-25b3-41dc-9d07-408b582c340a", "format": "json"}]: dispatch Feb 1 05:01:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e253 e253: 6 total, 6 up, 6 in Feb 1 05:01:16 localhost nova_compute[274317]: 2026-02-01 10:01:16.974 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:18 localhost nova_compute[274317]: 2026-02-01 10:01:18.190 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 171 KiB/s rd, 96 MiB/s wr, 286 op/s Feb 
1 05:01:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "auth_id": "david", "format": "json"}]: dispatch Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 1 05:01:18 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Feb 1 05:01:18 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "auth_id": "david", "format": "json"}]: dispatch Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f/28cffc9e-f368-4287-bbca-51fb2339a1c0 Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90d0a515-163b-4cd0-9158-f05911007a1a", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90d0a515-163b-4cd0-9158-f05911007a1a/.meta.tmp' Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90d0a515-163b-4cd0-9158-f05911007a1a/.meta.tmp' to config b'/volumes/_nogroup/90d0a515-163b-4cd0-9158-f05911007a1a/.meta' Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90d0a515-163b-4cd0-9158-f05911007a1a", "format": "json"}]: dispatch Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:19 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 1 05:01:19 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 1 05:01:19 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 1 05:01:19 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Feb 1 05:01:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e254 e254: 6 total, 6 up, 6 in Feb 1 05:01:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "204f122a-bf12-4f0e-934c-5da070005009", "format": "json"}]: dispatch Feb 1 05:01:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:204f122a-bf12-4f0e-934c-5da070005009, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:204f122a-bf12-4f0e-934c-5da070005009, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 112 KiB/s rd, 57 MiB/s wr, 181 op/s Feb 1 05:01:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:01:21 Feb 1 05:01:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:01:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:01:21 localhost 
ceph-mgr[278126]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'volumes', 'manila_metadata', 'manila_data', 'images'] Feb 1 05:01:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e255 e255: 6 total, 6 up, 6 in Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014874720826353993 of space, bias 1.0, pg target 0.2969985924995347 quantized to 32 (current 32) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.07146954390005583 of space, bias 1.0, pg target 14.222439236111109 quantized to 32 (current 32) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.0004539298052763819 quantized to 32 (current 32) Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:01:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0003500575795644891 of space, bias 4.0, pg target 0.25904260887772196 quantized to 16 (current 16) Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:01:21 localhost 
ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:01:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:01:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e256 e256: 6 total, 6 up, 6 in Feb 1 05:01:21 localhost nova_compute[274317]: 2026-02-01 10:01:21.978 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta' Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 122 KiB/s rd, 62 MiB/s wr, 198 op/s Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62834cfd-0e6e-4a9a-8e7a-94535f5d68c6", "size": 3221225472, "namespace_isolated": true, "mode": "0755", 
"format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/62834cfd-0e6e-4a9a-8e7a-94535f5d68c6/.meta.tmp' Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/62834cfd-0e6e-4a9a-8e7a-94535f5d68c6/.meta.tmp' to config b'/volumes/_nogroup/62834cfd-0e6e-4a9a-8e7a-94535f5d68c6/.meta' Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62834cfd-0e6e-4a9a-8e7a-94535f5d68c6", "format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:22.668+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1ec6001-c4f0-42e8-a3ae-66c185a36061' of type subvolume Feb 1 05:01:22 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1ec6001-c4f0-42e8-a3ae-66c185a36061' of type subvolume Feb 1 05:01:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1ec6001-c4f0-42e8-a3ae-66c185a36061", "force": true, "format": "json"}]: dispatch Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/c1ec6001-c4f0-42e8-a3ae-66c185a36061'' moved to trashcan Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1ec6001-c4f0-42e8-a3ae-66c185a36061, vol_name:cephfs) < "" Feb 1 05:01:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e257 e257: 6 total, 6 up, 6 in Feb 1 05:01:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:01:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2454881203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:01:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:01:23 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2454881203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:01:23 localhost nova_compute[274317]: 2026-02-01 10:01:23.223 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "8a7eb109-e3b5-473d-add7-9e1ee2a73cfe", "format": "json"}]: dispatch Feb 1 05:01:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 76 KiB/s wr, 180 op/s Feb 1 05:01:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "snap_name": "e6f47d41-176f-47be-9b1d-71861fa50733", "format": "json"}]: dispatch Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733, 
sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e258 e258: 6 total, 6 up, 6 in Feb 1 05:01:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "format": "json"}]: dispatch Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:25.610+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ebeb9c1e-187e-4fbb-8711-dc250e4ab635' of type subvolume Feb 1 05:01:25 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ebeb9c1e-187e-4fbb-8711-dc250e4ab635' of type subvolume Feb 1 05:01:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ebeb9c1e-187e-4fbb-8711-dc250e4ab635", "force": true, "format": "json"}]: dispatch Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ebeb9c1e-187e-4fbb-8711-dc250e4ab635'' moved to trashcan Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebeb9c1e-187e-4fbb-8711-dc250e4ab635, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90d0a515-163b-4cd0-9158-f05911007a1a", "format": "json"}]: dispatch Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:90d0a515-163b-4cd0-9158-f05911007a1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:90d0a515-163b-4cd0-9158-f05911007a1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:25.768+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90d0a515-163b-4cd0-9158-f05911007a1a' of type subvolume Feb 1 05:01:25 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'90d0a515-163b-4cd0-9158-f05911007a1a' of type subvolume Feb 1 05:01:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90d0a515-163b-4cd0-9158-f05911007a1a", "force": true, "format": "json"}]: dispatch Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/90d0a515-163b-4cd0-9158-f05911007a1a'' moved to trashcan Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90d0a515-163b-4cd0-9158-f05911007a1a, vol_name:cephfs) < "" Feb 1 05:01:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 76 KiB/s wr, 182 op/s Feb 1 05:01:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e259 e259: 6 total, 6 up, 6 in Feb 1 05:01:26 localhost nova_compute[274317]: 2026-02-01 10:01:26.980 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:27 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "77f0dc61-dc6e-408c-89f4-a2dacf94e1df", "format": "json"}]: dispatch Feb 1 05:01:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:28 localhost nova_compute[274317]: 2026-02-01 10:01:28.264 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 145 KiB/s rd, 130 KiB/s wr, 253 op/s Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes 
INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta' Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
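[Annotation] Earlier in this window (05:01:15 and 05:01:18 above) the same client tears access down in two steps: "fs subvolume deauthorize" drops the cephx caps that "fs subvolume authorize" created via auth get-or-create (mds path cap, osd pool/namespace cap, mon read), and "fs subvolume evict" then disconnects any sessions whose client_metadata.root is the subvolume path; the mgr follows up with "auth rm" on client.david once no subvolume references the ID. A sketch of that teardown, assuming the CLI argument order matches the vol_name/sub_name/auth_id fields in the logged JSON:

import subprocess

def revoke_access(vol, sub, auth_id):
    # Step 1: drop the caps granted to client.<auth_id> for this subvolume.
    subprocess.run(["ceph", "fs", "subvolume", "deauthorize", vol, sub, auth_id], check=True)
    # Step 2: evict any live clients still mounted with that auth_id, as logged by
    # volumes.fs.operations.versions.subvolume_v1 ("evict clients with auth_name=...").
    subprocess.run(["ceph", "fs", "subvolume", "evict", vol, sub, auth_id], check=True)

# revoke_access("cephfs", "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "david")  # values taken from the log above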
Feb 1 05:01:28 localhost podman[316293]: 2026-02-01 10:01:28.882774313 +0000 UTC m=+0.087743166 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:28.892+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef5904d0-6de5-446a-a091-edb3ad7abb31' of type subvolume Feb 1 05:01:28 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef5904d0-6de5-446a-a091-edb3ad7abb31' of type subvolume Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef5904d0-6de5-446a-a091-edb3ad7abb31", "force": true, "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ef5904d0-6de5-446a-a091-edb3ad7abb31'' moved to trashcan Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef5904d0-6de5-446a-a091-edb3ad7abb31, vol_name:cephfs) < "" Feb 1 05:01:28 localhost podman[316292]: 2026-02-01 10:01:28.938058346 +0000 UTC m=+0.146889409 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git) Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62834cfd-0e6e-4a9a-8e7a-94535f5d68c6", "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:28.964+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62834cfd-0e6e-4a9a-8e7a-94535f5d68c6' of type subvolume Feb 1 05:01:28 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62834cfd-0e6e-4a9a-8e7a-94535f5d68c6' of type subvolume Feb 1 05:01:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62834cfd-0e6e-4a9a-8e7a-94535f5d68c6", "force": true, "format": "json"}]: dispatch Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/62834cfd-0e6e-4a9a-8e7a-94535f5d68c6'' moved to trashcan Feb 1 05:01:28 localhost podman[316294]: 2026-02-01 10:01:28.989531741 +0000 UTC m=+0.191384227 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, 
org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62834cfd-0e6e-4a9a-8e7a-94535f5d68c6, vol_name:cephfs) < "" Feb 1 05:01:29 localhost podman[316293]: 2026-02-01 10:01:29.00203993 +0000 UTC m=+0.207008773 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:01:29 localhost 
podman[316292]: 2026-02-01 10:01:29.011845166 +0000 UTC m=+0.220676229 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7) Feb 1 05:01:29 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:01:29 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
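Each 'Started /usr/bin/podman healthcheck run <id>' / 'exec_died' / '<id>.service: Deactivated successfully.' triple above is a systemd transient unit executing the container's configured healthcheck ('test': '/openstack/healthcheck'). A hedged sketch of running the same probe by hand from Python; the container names are taken from the log, and the exit-code reading (0 means the check passed) follows podman's documented behaviour.

import subprocess

def healthcheck(container: str) -> bool:
    # Runs the container's own configured healthcheck once, as the
    # transient 'podman healthcheck run' units in the journal do.
    res = subprocess.run(["podman", "healthcheck", "run", container],
                         capture_output=True, text=True)
    return res.returncode == 0   # non-zero marks the container unhealthy

for name in ("ovn_metadata_agent", "ovn_controller", "openstack_network_exporter"):
    print(name, "healthy" if healthcheck(name) else "unhealthy")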
Feb 1 05:01:29 localhost podman[316300]: 2026-02-01 10:01:29.087139842 +0000 UTC m=+0.284323582 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 1 05:01:29 localhost podman[316300]: 2026-02-01 10:01:29.096090802 +0000 UTC m=+0.293274582 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:01:29 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
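node_exporter above is published on the host network at port 9100 ('ports': ['9100:9100']) with most collectors disabled and the systemd collector limited to the edpm_*/ovs*/openvswitch/virt*/rsyslog units. A minimal scrape of that endpoint, assuming it is reachable on localhost; the metric name used in the filter is the standard systemd-collector series and is only an example.

import urllib.request

# Fetch the Prometheus text exposition from the port mapped in config_data above.
with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    body = resp.read().decode()

# Keep the systemd unit-state series allowed through by
# --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
for line in body.splitlines():
    if line.startswith("node_systemd_unit_state") and 'state="active"' in line:
        print(line)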
Feb 1 05:01:29 localhost podman[316294]: 2026-02-01 10:01:29.117913841 +0000 UTC m=+0.319766307 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:01:29 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:01:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "snap_name": "e6f47d41-176f-47be-9b1d-71861fa50733_1bedbdd0-9afe-4342-8012-179fbaf0a969", "force": true, "format": "json"}]: dispatch Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733_1bedbdd0-9afe-4342-8012-179fbaf0a969, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta' Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733_1bedbdd0-9afe-4342-8012-179fbaf0a969, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "snap_name": "e6f47d41-176f-47be-9b1d-71861fa50733", "force": true, "format": "json"}]: dispatch Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta.tmp' to config b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a/.meta' Feb 1 05:01:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e6f47d41-176f-47be-9b1d-71861fa50733, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e260 e260: 6 total, 6 up, 6 in Feb 1 05:01:30 localhost podman[236852]: time="2026-02-01T10:01:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:01:30 localhost podman[236852]: @ - - [01/Feb/2026:10:01:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:01:30 localhost podman[236852]: @ - - [01/Feb/2026:10:01:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18316 "" "Go-http-client/1.1" Feb 1 05:01:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 69 KiB/s wr, 107 op/s Feb 1 05:01:30 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "28ed57d1-d4dd-4eee-89e7-1676bfca130b", "format": "json"}]: dispatch Feb 1 05:01:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:30 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:31 localhost openstack_network_exporter[239388]: ERROR 10:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:01:31 localhost openstack_network_exporter[239388]: Feb 1 05:01:31 localhost openstack_network_exporter[239388]: ERROR 10:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:01:31 localhost openstack_network_exporter[239388]: Feb 1 05:01:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", 
"sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "snap_name": "5e09fab8-143b-4edc-a899-7a58c4eb5f0b", "format": "json"}]: dispatch Feb 1 05:01:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e261 e261: 6 total, 6 up, 6 in Feb 1 05:01:31 localhost nova_compute[274317]: 2026-02-01 10:01:31.983 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "auth_id": "admin", "format": "json"}]: dispatch Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:32.332+0000 7f93ec23e640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Feb 1 05:01:32 localhost ceph-mgr[278126]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 69 KiB/s wr, 107 op/s Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "format": "json"}]: dispatch Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:32.481+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f' of type subvolume Feb 1 05:01:32 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not 
supported operation 'clone-status' is not allowed on subvolume 'ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f' of type subvolume Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f", "force": true, "format": "json"}]: dispatch Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f'' moved to trashcan Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ebb94a5c-b1fe-4a17-9f7d-7893b96d0e3f, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "format": "json"}]: dispatch Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:32.727+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49b736f9-feaf-4b7f-9d80-10ecfc8b132a' of type subvolume Feb 1 05:01:32 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49b736f9-feaf-4b7f-9d80-10ecfc8b132a' of type subvolume Feb 1 05:01:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "49b736f9-feaf-4b7f-9d80-10ecfc8b132a", "force": true, "format": "json"}]: dispatch Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/49b736f9-feaf-4b7f-9d80-10ecfc8b132a'' moved to trashcan Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49b736f9-feaf-4b7f-9d80-10ecfc8b132a, vol_name:cephfs) < "" Feb 1 05:01:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e262 e262: 6 total, 6 up, 6 in Feb 1 05:01:33 localhost nova_compute[274317]: 2026-02-01 10:01:33.305 274321 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e263 e263: 6 total, 6 up, 6 in Feb 1 05:01:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "28ed57d1-d4dd-4eee-89e7-1676bfca130b_517eb85e-a959-4dd5-be18-fb3a9c2f1228", "force": true, "format": "json"}]: dispatch Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b_517eb85e-a959-4dd5-be18-fb3a9c2f1228, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b_517eb85e-a959-4dd5-be18-fb3a9c2f1228, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "28ed57d1-d4dd-4eee-89e7-1676bfca130b", "force": true, "format": "json"}]: dispatch Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28ed57d1-d4dd-4eee-89e7-1676bfca130b, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 105 KiB/s wr, 148 op/s Feb 1 05:01:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "snap_name": 
"5e09fab8-143b-4edc-a899-7a58c4eb5f0b_efe3ba74-42c3-4ea0-9c14-8d6107d85424", "force": true, "format": "json"}]: dispatch Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b_efe3ba74-42c3-4ea0-9c14-8d6107d85424, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta' Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b_efe3ba74-42c3-4ea0-9c14-8d6107d85424, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "snap_name": "5e09fab8-143b-4edc-a899-7a58c4eb5f0b", "force": true, "format": "json"}]: dispatch Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta.tmp' to config b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3/.meta' Feb 1 05:01:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5e09fab8-143b-4edc-a899-7a58c4eb5f0b, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:35 localhost ovn_controller[152787]: 2026-02-01T10:01:35Z|00249|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Feb 1 05:01:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 77 KiB/s wr, 109 op/s Feb 1 05:01:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e264 e264: 6 total, 6 up, 6 in Feb 1 05:01:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": 
"json"}]: dispatch Feb 1 05:01:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:36 localhost nova_compute[274317]: 2026-02-01 10:01:36.985 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a/.meta.tmp' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a/.meta.tmp' to config b'/volumes/_nogroup/87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a/.meta' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a", "format": "json"}]: dispatch Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "77f0dc61-dc6e-408c-89f4-a2dacf94e1df_70668b5f-bf0f-414e-9eef-698d029be6a6", "force": true, "format": "json"}]: dispatch Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df_70668b5f-bf0f-414e-9eef-698d029be6a6, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df_70668b5f-bf0f-414e-9eef-698d029be6a6, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "77f0dc61-dc6e-408c-89f4-a2dacf94e1df", "force": true, "format": "json"}]: dispatch Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:77f0dc61-dc6e-408c-89f4-a2dacf94e1df, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e265 e265: 6 total, 6 up, 6 in Feb 1 05:01:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "format": "json"}]: dispatch Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:41d26af5-3d45-417d-aa39-08707e23e8c3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:41d26af5-3d45-417d-aa39-08707e23e8c3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:38 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:38.261+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41d26af5-3d45-417d-aa39-08707e23e8c3' of type subvolume Feb 1 05:01:38 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41d26af5-3d45-417d-aa39-08707e23e8c3' of type subvolume Feb 1 05:01:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "41d26af5-3d45-417d-aa39-08707e23e8c3", "force": true, "format": "json"}]: dispatch Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/41d26af5-3d45-417d-aa39-08707e23e8c3'' moved to trashcan Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41d26af5-3d45-417d-aa39-08707e23e8c3, vol_name:cephfs) < "" 
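The delete sequence above repeats a consistent pattern: the client probes 'fs clone status', the mgr answers EOPNOTSUPP (95) because the target is an ordinary subvolume rather than a clone, and the client goes on to 'fs subvolume rm' with force. A sketch of that probe-and-fall-back logic against the ceph CLI; the volume and subvolume names are copied from the log, while the JSON layout of the clone-status reply and the error wording matched on stderr are assumptions based on the messages shown here.

import json
import subprocess

def clone_state(vol: str, name: str):
    # Returns the clone state, or None if the target is not a clone
    # (the EOPNOTSUPP "not allowed on subvolume ... of type subvolume" case above).
    res = subprocess.run(["ceph", "fs", "clone", "status", vol, name,
                          "--format", "json"],
                         capture_output=True, text=True)
    if res.returncode != 0 and "not allowed on subvolume" in res.stderr:
        return None
    res.check_returncode()
    return json.loads(res.stdout)["status"]["state"]

vol, sub = "cephfs", "41d26af5-3d45-417d-aa39-08707e23e8c3"
if clone_state(vol, sub) in (None, "complete"):
    # Mirrors the 'fs subvolume rm ... "force": true' dispatch that follows the probe.
    subprocess.run(["ceph", "fs", "subvolume", "rm", vol, sub, "--force"],
                   check=True)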
Feb 1 05:01:38 localhost nova_compute[274317]: 2026-02-01 10:01:38.335 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 108 KiB/s wr, 97 op/s Feb 1 05:01:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e266 e266: 6 total, 6 up, 6 in Feb 1 05:01:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:01:39 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/626846641' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:01:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:01:39 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/626846641' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:01:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:01:39 localhost podman[316374]: 2026-02-01 10:01:39.865470465 +0000 UTC m=+0.078225099 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true) Feb 1 05:01:39 localhost podman[316374]: 2026-02-01 10:01:39.900427234 +0000 UTC m=+0.113181868 container exec_died 
3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 05:01:39 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:01:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e267 e267: 6 total, 6 up, 6 in Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. 
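Between the mgr calls, the monitor is answering 'df' and 'osd pool get-quota' for the volumes pool, the capacity check the client runs before provisioning. A short sketch of the same two queries, assuming the pool name from the log; the JSON key names follow the usual ceph output schema and may vary slightly across releases.

import json
import subprocess

def ceph_json(*args: str):
    out = subprocess.check_output(["ceph", *args, "--format", "json"], text=True)
    return json.loads(out)

# Cluster-wide usage, as in the mon_command {"prefix":"df"} dispatch above.
df = ceph_json("df")
print("total avail bytes:", df["stats"]["total_avail_bytes"])

# Quota on the 'volumes' pool, as in the 'osd pool get-quota' dispatch.
quota = ceph_json("osd", "pool", "get-quota", "volumes")
print("quota_max_bytes:", quota.get("quota_max_bytes"),
      "quota_max_objects:", quota.get("quota_max_objects"))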
Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.162322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100162364, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2621, "num_deletes": 265, "total_data_size": 3790027, "memory_usage": 3851312, "flush_reason": "Manual Compaction"} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Feb 1 05:01:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100178329, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2461212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25798, "largest_seqno": 28414, "table_properties": {"data_size": 2450128, "index_size": 7013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 27370, "raw_average_key_size": 22, "raw_value_size": 2426588, "raw_average_value_size": 2018, "num_data_blocks": 299, "num_entries": 1202, "num_filter_entries": 1202, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939991, "oldest_key_time": 1769939991, "file_creation_time": 1769940100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 16055 microseconds, and 6282 cpu microseconds. Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
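The rocksdb lines the monitor emits here, and the compaction that follows, carry machine-readable EVENT_LOG_v1 JSON payloads. A sketch that pulls those payloads out of journal lines like the ones above and prints the flush and compaction figures; the write-amplification noted in the comments reproduces the 9.2 reported in the compaction summary a few entries below (22547164 output bytes / 2461212 bytes of level-0 input).

import json
import re
import sys

# Matches the 'rocksdb: ... EVENT_LOG_v1 {...}' payloads in the ceph-mon journal,
# e.g. fed from: journalctl -u <ceph mon unit> | python3 rocksdb_events.py
EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    ev = json.loads(m.group(1))
    if ev.get("event") == "flush_started":
        print("flush: %(num_entries)d entries, %(total_data_size)d bytes" % ev)
    elif ev.get("event") == "compaction_started":
        # JOB 22 above: input_data_size 23916880 = 2461212 (L0 file 41) + 21455668 (L6 file 39)
        print("compaction in: %(input_data_size)d bytes" % ev)
    elif ev.get("event") == "compaction_finished":
        # write-amplify = total_output_size / L0 input = 22547164 / 2461212 ~= 9.2
        print("compaction out: %(total_output_size)d bytes in %(compaction_time_micros)d us" % ev)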
Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.178375) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2461212 bytes OK Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.178398) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.180368) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.180394) EVENT_LOG_v1 {"time_micros": 1769940100180387, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.180414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3777511, prev total WAL file size 3777806, number of live WAL files 2. Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.181366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2403KB)], [39(20MB)] Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100181404, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 23916880, "oldest_snapshot_seqno": -1} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13772 keys, 22547164 bytes, temperature: kUnknown Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100295153, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 22547164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 22465632, "index_size": 46039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34437, "raw_key_size": 370026, "raw_average_key_size": 26, "raw_value_size": 22228286, "raw_average_value_size": 1614, "num_data_blocks": 1725, "num_entries": 13772, "num_filter_entries": 13772, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769940100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.295470) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 22547164 bytes Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.348346) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.1 rd, 198.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 20.5 +0.0 blob) out(21.5 +0.0 blob), read-write-amplify(18.9) write-amplify(9.2) OK, records in: 14314, records dropped: 542 output_compression: NoCompression Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.348378) EVENT_LOG_v1 {"time_micros": 1769940100348365, "job": 22, "event": "compaction_finished", "compaction_time_micros": 113821, "compaction_time_cpu_micros": 53254, "output_level": 6, "num_output_files": 1, "total_output_size": 22547164, "num_input_records": 14314, "num_output_records": 13772, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100348848, "job": 22, "event": "table_file_deletion", "file_number": 41} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940100351895, "job": 22, "event": "table_file_deletion", "file_number": 39} Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.181265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.352005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.352012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.352016) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.352019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:01:40.352022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:01:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 147 KiB/s wr, 132 op/s Feb 1 05:01:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "8a7eb109-e3b5-473d-add7-9e1ee2a73cfe_b745ea45-6be9-407d-90cc-c523f0225057", "force": true, "format": "json"}]: dispatch Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe_b745ea45-6be9-407d-90cc-c523f0225057, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe_b745ea45-6be9-407d-90cc-c523f0225057, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "8a7eb109-e3b5-473d-add7-9e1ee2a73cfe", "force": true, "format": "json"}]: dispatch Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8a7eb109-e3b5-473d-add7-9e1ee2a73cfe, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command 
mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:01:40 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/253343049' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:01:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:01:40 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/253343049' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:01:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a", "format": "json"}]: dispatch Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:40.839+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a' of type subvolume Feb 1 05:01:40 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a' of type subvolume Feb 1 05:01:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a", "force": true, "format": "json"}]: dispatch Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a'' moved to trashcan Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87aeeb6c-6bc9-4b4f-8e85-f9e7d23d4c9a, vol_name:cephfs) < "" Feb 1 05:01:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "95f0c0d1-c008-4f85-a891-030f31cdce50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/95f0c0d1-c008-4f85-a891-030f31cdce50/.meta.tmp' Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/95f0c0d1-c008-4f85-a891-030f31cdce50/.meta.tmp' to config b'/volumes/_nogroup/95f0c0d1-c008-4f85-a891-030f31cdce50/.meta' Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "95f0c0d1-c008-4f85-a891-030f31cdce50", "format": "json"}]: dispatch Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:41.777 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:01:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:41.778 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:01:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:41.778 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:01:41 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e268 e268: 6 total, 6 up, 6 in Feb 1 05:01:41 localhost nova_compute[274317]: 2026-02-01 10:01:41.988 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail Feb 1 05:01:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
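Illustrative sketch (not part of the capture): the rocksdb records at the top of this window emit their flush and compaction statistics as EVENT_LOG_v1 JSON payloads (flush_finished, compaction_started, table_file_creation, compaction_finished). Assuming one record per line, a minimal Python helper for pulling those payloads out of a capture like this one; the function name and the example filename are illustrative, not a RocksDB or Ceph interface:

import json
import re

EVENT_LOG = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(lines):
    """Yield the JSON payloads of RocksDB EVENT_LOG_v1 records (flush/compaction stats)."""
    for line in lines:
        m = EVENT_LOG.search(line)
        if not m:
            continue
        try:
            yield json.loads(m.group(1))
        except json.JSONDecodeError:
            continue  # record was wrapped or truncated in the capture; skip it

# e.g. compaction summaries only (hypothetical filename):
# [e for e in rocksdb_events(open("ceph-mon.log")) if e.get("event") == "compaction_finished"]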
Feb 1 05:01:42 localhost podman[316393]: 2026-02-01 10:01:42.865672416 +0000 UTC m=+0.077133255 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 05:01:42 localhost podman[316393]: 2026-02-01 10:01:42.874864122 +0000 UTC m=+0.086324971 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:01:42 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
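Illustrative sketch (not part of the capture): the two podman records above report a container event type (health_status, exec_died), the 64-character container id, and a flat key=value attribute list in parentheses. Assuming one record per line and using only the attribute names visible here, a minimal parser; parse_podman_event and its regex are illustrative, not podman tooling:

import re

# "... container <event> <64-hex container id> (key=value, key=value, ...)"
PODMAN_EVENT = re.compile(
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)"
)

def parse_podman_event(line):
    """Return event, container name and health_status for a podman event record, else None."""
    m = PODMAN_EVENT.search(line)
    if not m:
        return None
    attrs = {}
    # Split only on ", key=" so commas inside the nested config_data value are left alone.
    for part in re.split(r", (?=[A-Za-z_][\w.-]*=)", m.group("attrs")):
        key, _, value = part.partition("=")
        attrs[key] = value
    return {"event": m.group("event"),
            "name": attrs.get("name"),
            "health_status": attrs.get("health_status")}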
Feb 1 05:01:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e269 e269: 6 total, 6 up, 6 in Feb 1 05:01:43 localhost nova_compute[274317]: 2026-02-01 10:01:43.368 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "204f122a-bf12-4f0e-934c-5da070005009_162585e0-9e26-4972-8197-83daa42ea5eb", "force": true, "format": "json"}]: dispatch Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:204f122a-bf12-4f0e-934c-5da070005009_162585e0-9e26-4972-8197-83daa42ea5eb, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:204f122a-bf12-4f0e-934c-5da070005009_162585e0-9e26-4972-8197-83daa42ea5eb, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "204f122a-bf12-4f0e-934c-5da070005009", "force": true, "format": "json"}]: dispatch Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:204f122a-bf12-4f0e-934c-5da070005009, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:204f122a-bf12-4f0e-934c-5da070005009, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 94 KiB/s wr, 86 op/s Feb 1 05:01:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"95f0c0d1-c008-4f85-a891-030f31cdce50", "format": "json"}]: dispatch Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:95f0c0d1-c008-4f85-a891-030f31cdce50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:95f0c0d1-c008-4f85-a891-030f31cdce50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:44 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:44.940+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95f0c0d1-c008-4f85-a891-030f31cdce50' of type subvolume Feb 1 05:01:44 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95f0c0d1-c008-4f85-a891-030f31cdce50' of type subvolume Feb 1 05:01:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "95f0c0d1-c008-4f85-a891-030f31cdce50", "force": true, "format": "json"}]: dispatch Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/95f0c0d1-c008-4f85-a891-030f31cdce50'' moved to trashcan Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95f0c0d1-c008-4f85-a891-030f31cdce50, vol_name:cephfs) < "" Feb 1 05:01:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:01:45 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:01:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:01:45 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:01:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:01:45 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 2e901022-96b3-45db-8af4-9f332f38248c (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:01:45 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 2e901022-96b3-45db-8af4-9f332f38248c (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:01:45 localhost ceph-mgr[278126]: [progress INFO root] Completed event 2e901022-96b3-45db-8af4-9f332f38248c 
(Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:01:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:01:45 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:01:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 79 KiB/s wr, 73 op/s Feb 1 05:01:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:01:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:01:46 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:01:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:01:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "0463ec36-25b3-41dc-9d07-408b582c340a_f121621c-4b0c-4a5e-93bc-9760f71241c7", "force": true, "format": "json"}]: dispatch Feb 1 05:01:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a_f121621c-4b0c-4a5e-93bc-9760f71241c7, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e270 e270: 6 total, 6 up, 6 in Feb 1 05:01:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a_f121621c-4b0c-4a5e-93bc-9760f71241c7, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "snap_name": "0463ec36-25b3-41dc-9d07-408b582c340a", "force": true, "format": "json"}]: dispatch Feb 1 05:01:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:47 localhost nova_compute[274317]: 2026-02-01 10:01:47.045 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' Feb 1 05:01:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta.tmp' to config b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e/.meta' Feb 1 05:01:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0463ec36-25b3-41dc-9d07-408b582c340a, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:47 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:01:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e271 e271: 6 total, 6 up, 6 in Feb 1 05:01:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta' Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "format": "json"}]: dispatch Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 156 KiB/s wr, 85 op/s Feb 1 05:01:48 localhost nova_compute[274317]: 2026-02-01 10:01:48.406 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta' Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "format": "json"}]: dispatch Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e272 e272: 6 total, 6 up, 6 in Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.126 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.125 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.127 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.128 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) 
do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:01:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "format": "json"}]: dispatch Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:50 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:50.316+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fde4b04f-3cda-4612-9edf-b8d93a1f6d0e' of type subvolume Feb 1 05:01:50 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fde4b04f-3cda-4612-9edf-b8d93a1f6d0e' of type subvolume Feb 1 05:01:50 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fde4b04f-3cda-4612-9edf-b8d93a1f6d0e", "force": true, "format": "json"}]: dispatch Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fde4b04f-3cda-4612-9edf-b8d93a1f6d0e'' moved to trashcan Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:50 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fde4b04f-3cda-4612-9edf-b8d93a1f6d0e, vol_name:cephfs) < "" Feb 1 05:01:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 853 B/s rd, 71 KiB/s wr, 6 op/s Feb 1 05:01:50 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:50.429 259225 INFO neutron.agent.linux.ip_lib [None req-41f8917a-dfb6-4337-850d-d6d5540db9cb - - - - - -] Device tap87d2d119-c6 cannot be used as it has no MAC address#033[00m Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.454 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:50 localhost kernel: device tap87d2d119-c6 entered promiscuous mode Feb 1 05:01:50 localhost NetworkManager[5972]: [1769940110.4651] manager: (tap87d2d119-c6): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.465 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:50 localhost ovn_controller[152787]: 2026-02-01T10:01:50Z|00250|binding|INFO|Claiming lport 87d2d119-c67b-45d9-ae55-273f194d0fcf for this chassis. Feb 1 05:01:50 localhost ovn_controller[152787]: 2026-02-01T10:01:50Z|00251|binding|INFO|87d2d119-c67b-45d9-ae55-273f194d0fcf: Claiming unknown Feb 1 05:01:50 localhost systemd-udevd[316509]: Network interface NamePolicy= disabled on kernel command line. Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.480 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ac6c4d149e42a78d91221782aba2a7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b87d577-bb4d-4fa7-8fd1-8b8b7a56f357, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=87d2d119-c67b-45d9-ae55-273f194d0fcf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.482 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 87d2d119-c67b-45d9-ae55-273f194d0fcf in datapath fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36 bound to our chassis#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.484 158655 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 1 05:01:50 localhost ovn_metadata_agent[158650]: 2026-02-01 10:01:50.485 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[33837964-5288-4976-bed5-d7381d56e207]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost ovn_controller[152787]: 2026-02-01T10:01:50Z|00252|binding|INFO|Setting lport 87d2d119-c67b-45d9-ae55-273f194d0fcf ovn-installed in OVS Feb 1 05:01:50 localhost ovn_controller[152787]: 2026-02-01T10:01:50Z|00253|binding|INFO|Setting lport 87d2d119-c67b-45d9-ae55-273f194d0fcf up in Southbound Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.500 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost 
journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost journal[224955]: ethtool ioctl error on tap87d2d119-c6: No such device Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.540 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:50 localhost nova_compute[274317]: 2026-02-01 10:01:50.574 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:51 localhost podman[316580]: Feb 1 05:01:51 localhost podman[316580]: 2026-02-01 10:01:51.492354382 +0000 UTC m=+0.095542229 container create 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:01:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "snap_name": "7c62edf1-e706-4a7c-a38f-d41949f0e0ac", "format": "json"}]: dispatch Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:51 localhost systemd[1]: Started libpod-conmon-77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0.scope. Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:01:51 localhost podman[316580]: 2026-02-01 10:01:51.446790642 +0000 UTC m=+0.049978519 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 05:01:51 localhost systemd[1]: Started libcrun container. 
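Illustrative sketch (not part of the capture): the PortBindingUpdatedEvent record a few lines earlier dumps the whole OVN Port_Binding row, and its external_ids map is where the Neutron attributes (neutron:cidrs, neutron:device_owner, neutron:network_name, ...) are carried. The map is printed as a Python dict of plain strings, so, assuming one record per line, it can be recovered directly; neutron_port_attrs is an illustrative helper, not an OVN or Neutron API:

import ast
import re

EXTERNAL_IDS = re.compile(r"external_ids=(\{.*?\})")

def neutron_port_attrs(line):
    """Return the neutron:* entries of an OVN Port_Binding row dump, or {} if absent."""
    m = EXTERNAL_IDS.search(line)
    if not m:
        return {}
    ids = ast.literal_eval(m.group(1))  # keys and values are plain strings
    return {key: value for key, value in ids.items() if key.startswith("neutron:")}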
Feb 1 05:01:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab4f9ef39bfd1dad63e076d392e8797c8a2c365aac7cee21797b99cee32b651/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 05:01:51 localhost podman[316580]: 2026-02-01 10:01:51.565327216 +0000 UTC m=+0.168515073 container init 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:01:51 localhost podman[316580]: 2026-02-01 10:01:51.573678426 +0000 UTC m=+0.176866273 container start 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:01:51 localhost dnsmasq[316598]: started, version 2.85 cachesize 150 Feb 1 05:01:51 localhost dnsmasq[316598]: DNS service limited to local subnets Feb 1 05:01:51 localhost dnsmasq[316598]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 05:01:51 localhost dnsmasq[316598]: warning: no upstream servers configured Feb 1 05:01:51 localhost dnsmasq-dhcp[316598]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 05:01:51 localhost dnsmasq[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/addn_hosts - 0 addresses Feb 1 05:01:51 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/host Feb 1 05:01:51 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/opts Feb 1 05:01:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "snap_name": "399a0ea4-3929-4405-9bc1-c3a475bd2a27", "format": "json"}]: dispatch Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:01:51 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:51.805 259225 INFO neutron.agent.dhcp.agent [None req-814cfd38-6d39-486a-9b27-2873a8122081 - - - - - -] DHCP configuration for 
ports {'b5df0a2f-b375-49b7-b539-001a87c34d16'} is completed#033[00m Feb 1 05:01:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e273 e273: 6 total, 6 up, 6 in Feb 1 05:01:52 localhost nova_compute[274317]: 2026-02-01 10:01:52.094 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 944 B/s rd, 78 KiB/s wr, 7 op/s Feb 1 05:01:53 localhost nova_compute[274317]: 2026-02-01 10:01:53.443 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:53 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:53.721 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T10:01:53Z, description=, device_id=f89734fc-059a-400e-996c-2c8f8ad88e03, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=87b6ab84-336f-4b01-a756-e11432a5bed7, ip_allocation=immediate, mac_address=fa:16:3e:18:1c:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T10:01:49Z, description=, dns_domain=, id=fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1492791037-network, port_security_enabled=True, project_id=02ac6c4d149e42a78d91221782aba2a7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25226, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3443, status=ACTIVE, subnets=['fdd813ba-81d9-4f06-b4e4-b6261e5d2046'], tags=[], tenant_id=02ac6c4d149e42a78d91221782aba2a7, updated_at=2026-02-01T10:01:49Z, vlan_transparent=None, network_id=fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, port_security_enabled=False, project_id=02ac6c4d149e42a78d91221782aba2a7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3470, status=DOWN, tags=[], tenant_id=02ac6c4d149e42a78d91221782aba2a7, updated_at=2026-02-01T10:01:53Z on network fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36#033[00m Feb 1 05:01:53 localhost dnsmasq[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/addn_hosts - 1 addresses Feb 1 05:01:53 localhost podman[316617]: 2026-02-01 10:01:53.940628145 +0000 UTC m=+0.060993832 container kill 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:01:53 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/host Feb 1 05:01:53 localhost dnsmasq-dhcp[316598]: read 
/var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/opts Feb 1 05:01:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:54.185 259225 INFO neutron.agent.dhcp.agent [None req-bcf64f2e-2990-498f-ab5c-5c988f5208b3 - - - - - -] DHCP configuration for ports {'87b6ab84-336f-4b01-a756-e11432a5bed7'} is completed#033[00m Feb 1 05:01:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 478 B/s rd, 72 KiB/s wr, 6 op/s Feb 1 05:01:54 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:54.599 259225 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-01T10:01:53Z, description=, device_id=f89734fc-059a-400e-996c-2c8f8ad88e03, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=87b6ab84-336f-4b01-a756-e11432a5bed7, ip_allocation=immediate, mac_address=fa:16:3e:18:1c:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-01T10:01:49Z, description=, dns_domain=, id=fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1492791037-network, port_security_enabled=True, project_id=02ac6c4d149e42a78d91221782aba2a7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25226, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3443, status=ACTIVE, subnets=['fdd813ba-81d9-4f06-b4e4-b6261e5d2046'], tags=[], tenant_id=02ac6c4d149e42a78d91221782aba2a7, updated_at=2026-02-01T10:01:49Z, vlan_transparent=None, network_id=fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, port_security_enabled=False, project_id=02ac6c4d149e42a78d91221782aba2a7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3470, status=DOWN, tags=[], tenant_id=02ac6c4d149e42a78d91221782aba2a7, updated_at=2026-02-01T10:01:53Z on network fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36#033[00m Feb 1 05:01:54 localhost dnsmasq[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/addn_hosts - 1 addresses Feb 1 05:01:54 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/host Feb 1 05:01:54 localhost podman[316655]: 2026-02-01 10:01:54.811409693 +0000 UTC m=+0.061606861 container kill 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 1 05:01:54 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/opts Feb 1 05:01:54 localhost systemd[1]: tmp-crun.VqGlmT.mount: Deactivated successfully. 
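Illustrative sketch (not part of the capture): the recurring ceph-mgr pgmap records ("pgmap v562: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; ...") compress the cluster state into one line. Assuming one record per line, a minimal helper for turning them into numbers; the regex only covers the fields shown in this capture and is illustrative, not a Ceph interface:

import re

PGMAP = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\d+(?:\.\d+)?) (?P<data_unit>[KMGT]iB) data, "
    r"(?P<used>\d+(?:\.\d+)?) (?P<used_unit>[KMGT]iB) used, "
    r"(?P<avail>\d+(?:\.\d+)?) (?P<avail_unit>[KMGT]iB) / "
    r"(?P<total>\d+(?:\.\d+)?) (?P<total_unit>[KMGT]iB) avail"
)

UNIT = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def pgmap_usage(line):
    """Return (pgmap version, pg count, used bytes, total bytes), or None if not a pgmap record."""
    m = PGMAP.search(line)
    if not m:
        return None
    used = float(m["used"]) * UNIT[m["used_unit"]]
    total = float(m["total"]) * UNIT[m["total_unit"]]
    return int(m["version"]), int(m["pgs"]), used, total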
Feb 1 05:01:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "snap_name": "7c62edf1-e706-4a7c-a38f-d41949f0e0ac", "target_sub_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "format": "json"}]: dispatch Feb 1 05:01:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, target_sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:01:55 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:01:55.028 259225 INFO neutron.agent.dhcp.agent [None req-e757e397-3195-4d08-8709-372e72089e4d - - - - - -] DHCP configuration for ports {'87b6ab84-336f-4b01-a756-e11432a5bed7'} is completed#033[00m Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 6737350b-dbf1-446c-8a6f-4f1713a6d13c for path b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, target_sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:01:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "format": "json"}]: dispatch Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.105+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 
2026-02-01T10:01:55.105+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.105+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.105+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.105+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.118 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "snap_name": "399a0ea4-3929-4405-9bc1-c3a475bd2a27", "target_sub_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "format": "json"}]: dispatch Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, target_sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:01:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 85bf3dc4-239a-4ea6-b907-935513f36b9b) Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] tracking-id c629a215-9f83-4c2c-abaf-00ba2a6ad08e for path b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, target_sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.372+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.372+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.372+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.372+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:01:55.372+0000 7f93f0a47640 -1 client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: client.0 error registering admin socket command: (17) File exists Feb 1 05:01:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "format": "json"}]: dispatch Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508 Feb 1 05:01:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 9c1c4137-22b3-4b8a-9eaf-875da7fa2508) Feb 1 05:01:55 localhost ovn_controller[152787]: 2026-02-01T10:01:55Z|00254|ovn_bfd|INFO|Enabled BFD on interface ovn-2186fb-0 Feb 1 05:01:55 localhost ovn_controller[152787]: 2026-02-01T10:01:55Z|00255|ovn_bfd|INFO|Enabled BFD on interface ovn-e1cc33-0 Feb 1 05:01:55 localhost ovn_controller[152787]: 
2026-02-01T10:01:55Z|00256|ovn_bfd|INFO|Enabled BFD on interface ovn-45aa31-0 Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.476 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.496 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.500 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.508 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.583 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:55 localhost nova_compute[274317]: 2026-02-01 10:01:55.592 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.114 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:01:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 58 KiB/s wr, 4 op/s Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.572 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:56 localhost nova_compute[274317]: 2026-02-01 10:01:56.581 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 e274: 6 total, 6 up, 6 in Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.096 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.123 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.123 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.124 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:01:57 
localhost nova_compute[274317]: 2026-02-01 10:01:57.209 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:01:57 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/636120249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.593 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.777 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.779 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11545MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 
10:01:57.780 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:01:57 localhost nova_compute[274317]: 2026-02-01 10:01:57.780 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.225 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.226 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.245 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:01:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 89 KiB/s wr, 8 op/s Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.492 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:01:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:01:58 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/196178825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.706 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.713 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.733 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.736 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:01:58 localhost nova_compute[274317]: 2026-02-01 10:01:58.736 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:01:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 85bf3dc4-239a-4ea6-b907-935513f36b9b) -- by 0 seconds Feb 1 05:01:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:01:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' Feb 1 05:01:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta' Feb 1 05:01:59 localhost nova_compute[274317]: 2026-02-01 10:01:59.737 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:59 localhost nova_compute[274317]: 2026-02-01 10:01:59.738 274321 DEBUG oslo_service.periodic_task 
[None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:59 localhost nova_compute[274317]: 2026-02-01 10:01:59.738 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:01:59 localhost nova_compute[274317]: 2026-02-01 10:01:59.739 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:01:59 localhost podman[316749]: 2026-02-01 10:01:59.857993603 +0000 UTC m=+0.063814870 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 1 05:01:59 localhost systemd[1]: tmp-crun.kzZ2Ew.mount: Deactivated successfully. 
Feb 1 05:01:59 localhost podman[316749]: 2026-02-01 10:01:59.915282618 +0000 UTC m=+0.121103925 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:01:59 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:01:59 localhost podman[316748]: 2026-02-01 10:01:59.916103183 +0000 UTC m=+0.123066326 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 05:01:59 localhost podman[316747]: 2026-02-01 10:01:59.974000158 +0000 UTC m=+0.183348305 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1769056855, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=) Feb 1 05:01:59 localhost podman[316755]: 2026-02-01 10:01:59.894710237 +0000 UTC m=+0.088444517 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:02:00 localhost podman[316748]: 2026-02-01 10:01:59.999881985 +0000 UTC m=+0.206845128 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent) Feb 1 05:02:00 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:02:00 localhost podman[316747]: 2026-02-01 10:02:00.014745608 +0000 UTC m=+0.224093735 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, distribution-scope=public, version=9.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 1 05:02:00 localhost podman[236852]: time="2026-02-01T10:02:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:02:00 localhost podman[316755]: 2026-02-01 10:02:00.027723152 +0000 UTC m=+0.221457412 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:02:00 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 05:02:00 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:02:00 localhost podman[236852]: @ - - [01/Feb/2026:10:02:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157180 "" "Go-http-client/1.1" Feb 1 05:02:00 localhost podman[236852]: @ - - [01/Feb/2026:10:02:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18812 "" "Go-http-client/1.1" Feb 1 05:02:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 605 B/s rd, 84 KiB/s wr, 8 op/s Feb 1 05:02:01 localhost nova_compute[274317]: 2026-02-01 10:02:01.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:01 localhost openstack_network_exporter[239388]: ERROR 10:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:02:01 localhost openstack_network_exporter[239388]: Feb 1 05:02:01 localhost openstack_network_exporter[239388]: ERROR 10:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:02:01 localhost openstack_network_exporter[239388]: Feb 1 05:02:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fe07adb4-ff99-4e52-87fe-1c3a91d3e012", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:02 localhost nova_compute[274317]: 2026-02-01 10:02:02.097 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 71 KiB/s wr, 6 op/s Feb 1 05:02:03 localhost ovn_controller[152787]: 2026-02-01T10:02:03Z|00257|ovn_bfd|INFO|Disabled BFD on interface ovn-2186fb-0 Feb 1 05:02:03 localhost ovn_controller[152787]: 2026-02-01T10:02:03Z|00258|ovn_bfd|INFO|Disabled BFD on interface ovn-e1cc33-0 Feb 1 05:02:03 localhost ovn_controller[152787]: 2026-02-01T10:02:03Z|00259|ovn_bfd|INFO|Disabled BFD on interface ovn-45aa31-0 Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.269 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.274 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.291 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:03 localhost dnsmasq[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/addn_hosts - 0 addresses Feb 1 05:02:03 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/host Feb 1 05:02:03 localhost podman[316847]: 2026-02-01 10:02:03.410965363 +0000 UTC m=+0.059012651 container kill 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 05:02:03 localhost dnsmasq-dhcp[316598]: read /var/lib/neutron/dhcp/fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36/opts Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.526 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.595 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:03 localhost ovn_controller[152787]: 2026-02-01T10:02:03Z|00260|binding|INFO|Releasing lport 87d2d119-c67b-45d9-ae55-273f194d0fcf from this chassis (sb_readonly=0) Feb 1 05:02:03 localhost ovn_controller[152787]: 2026-02-01T10:02:03Z|00261|binding|INFO|Setting lport 87d2d119-c67b-45d9-ae55-273f194d0fcf down in Southbound Feb 1 05:02:03 localhost kernel: device tap87d2d119-c6 left promiscuous mode Feb 1 05:02:03 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:03.604 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ac6c4d149e42a78d91221782aba2a7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b87d577-bb4d-4fa7-8fd1-8b8b7a56f357, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=87d2d119-c67b-45d9-ae55-273f194d0fcf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:02:03 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:03.606 158655 INFO neutron.agent.ovn.metadata.agent [-] Port 
87d2d119-c67b-45d9-ae55-273f194d0fcf in datapath fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36 unbound from our chassis#033[00m Feb 1 05:02:03 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:03.609 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 05:02:03 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:03.610 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[ca189bc7-01e9-4cf8-bf2f-0389fcc85792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 05:02:03 localhost nova_compute[274317]: 2026-02-01 10:02:03.617 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:04 localhost nova_compute[274317]: 2026-02-01 10:02:04.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 42 KiB/s wr, 5 op/s Feb 1 05:02:04 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 9c1c4137-22b3-4b8a-9eaf-875da7fa2508) -- by 0 seconds Feb 1 05:02:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:05 localhost dnsmasq[316598]: exiting on receipt of SIGTERM Feb 1 05:02:05 localhost podman[316885]: 2026-02-01 10:02:05.878089013 +0000 UTC m=+0.058027200 container kill 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:02:05 localhost systemd[1]: libpod-77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0.scope: Deactivated successfully. Feb 1 05:02:05 localhost podman[316899]: 2026-02-01 10:02:05.946616388 +0000 UTC m=+0.053721395 container died 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 1 05:02:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0-userdata-shm.mount: Deactivated successfully. 
Feb 1 05:02:05 localhost podman[316899]: 2026-02-01 10:02:05.980033429 +0000 UTC m=+0.087138396 container cleanup 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 05:02:05 localhost systemd[1]: libpod-conmon-77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0.scope: Deactivated successfully. Feb 1 05:02:06 localhost podman[316901]: 2026-02-01 10:02:06.022202864 +0000 UTC m=+0.122744517 container remove 77bd20e1269b5f386b0f73f6b117c1c70f23e235af2a80da658b2e5150aa51a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fdca4e7a-a2ed-4b3f-98b0-3078a16d3a36, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 1 05:02:06 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:02:06.059 259225 INFO neutron.agent.dhcp.agent [None req-3e38382e-78d3-418e-aff8-c11187cd272b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 05:02:06 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:02:06.059 259225 INFO neutron.agent.dhcp.agent [None req-3e38382e-78d3-418e-aff8-c11187cd272b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 05:02:06 localhost nova_compute[274317]: 2026-02-01 10:02:06.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:06 localhost nova_compute[274317]: 2026-02-01 10:02:06.296 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 42 KiB/s wr, 5 op/s Feb 1 05:02:06 localhost systemd[1]: var-lib-containers-storage-overlay-2ab4f9ef39bfd1dad63e076d392e8797c8a2c365aac7cee21797b99cee32b651-merged.mount: Deactivated successfully. Feb 1 05:02:06 localhost systemd[1]: run-netns-qdhcp\x2dfdca4e7a\x2da2ed\x2d4b3f\x2d98b0\x2d3078a16d3a36.mount: Deactivated successfully. 
Feb 1 05:02:07 localhost nova_compute[274317]: 2026-02-01 10:02:07.099 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 715 B/s rd, 48 KiB/s wr, 5 op/s Feb 1 05:02:08 localhost nova_compute[274317]: 2026-02-01 10:02:08.553 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.snap/7c62edf1-e706-4a7c-a38f-d41949f0e0ac/2fc8045e-224c-42d5-9fbe-50f9c8c8434b' to b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/3dbfb658-cf3e-4815-af2c-a2a39448d949' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fe07adb4-ff99-4e52-87fe-1c3a91d3e012/.meta.tmp' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fe07adb4-ff99-4e52-87fe-1c3a91d3e012/.meta.tmp' to config b'/volumes/_nogroup/fe07adb4-ff99-4e52-87fe-1c3a91d3e012/.meta' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fe07adb4-ff99-4e52-87fe-1c3a91d3e012", "format": "json"}]: dispatch Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' Feb 1 05:02:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta' Feb 1 05:02:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 25 KiB/s wr, 2 op/s Feb 1 05:02:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:02:10 localhost podman[316927]: 2026-02-01 10:02:10.871322879 +0000 UTC m=+0.085343312 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 1 05:02:10 localhost podman[316927]: 2026-02-01 10:02:10.885777619 +0000 UTC m=+0.099798052 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:02:10 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:02:12 localhost nova_compute[274317]: 2026-02-01 10:02:12.137 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 25 KiB/s wr, 2 op/s Feb 1 05:02:13 localhost nova_compute[274317]: 2026-02-01 10:02:13.597 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:02:13 localhost podman[316946]: 2026-02-01 10:02:13.862855611 +0000 UTC m=+0.077471835 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:02:13 localhost podman[316946]: 2026-02-01 10:02:13.897616404 +0000 UTC m=+0.112232668 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, 
container_name=podman_exporter) Feb 1 05:02:13 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 05:02:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 853 B/s rd, 37 KiB/s wr, 4 op/s Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.snap/399a0ea4-3929-4405-9bc1-c3a475bd2a27/1fb3ab05-8c33-4b3d-b12a-80ef113677d2' to b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/427f0da2-51c7-4572-8a2f-14669b0cde52' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] untracking 6737350b-dbf1-446c-8a6f-4f1713a6d13c Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta.tmp' to config b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 85bf3dc4-239a-4ea6-b907-935513f36b9b) Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] untracking c629a215-9f83-4c2c-abaf-00ba2a6ad08e Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta.tmp' to config b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 9c1c4137-22b3-4b8a-9eaf-875da7fa2508) Feb 1 05:02:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "554e72a1-8e1b-418d-8d4e-df4ac0aff10b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/554e72a1-8e1b-418d-8d4e-df4ac0aff10b/.meta.tmp' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/554e72a1-8e1b-418d-8d4e-df4ac0aff10b/.meta.tmp' to config b'/volumes/_nogroup/554e72a1-8e1b-418d-8d4e-df4ac0aff10b/.meta' Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "format": "json"}]: dispatch Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "554e72a1-8e1b-418d-8d4e-df4ac0aff10b", "format": "json"}]: dispatch Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ca5077e-5800-458a-bcdd-debc86c3a775", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ca5077e-5800-458a-bcdd-debc86c3a775/.meta.tmp' Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ca5077e-5800-458a-bcdd-debc86c3a775/.meta.tmp' to config b'/volumes/_nogroup/0ca5077e-5800-458a-bcdd-debc86c3a775/.meta' Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ca5077e-5800-458a-bcdd-debc86c3a775", "format": "json"}]: dispatch Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "snap_name": "79f87f1d-b2e3-46b8-8f19-152c3b678c27", "format": "json"}]: dispatch Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27, 
sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 24 KiB/s wr, 3 op/s Feb 1 05:02:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ca5077e-5800-458a-bcdd-debc86c3a775", "format": "json"}]: dispatch Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0ca5077e-5800-458a-bcdd-debc86c3a775, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0ca5077e-5800-458a-bcdd-debc86c3a775, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:16 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:16.914+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ca5077e-5800-458a-bcdd-debc86c3a775' of type subvolume Feb 1 05:02:16 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ca5077e-5800-458a-bcdd-debc86c3a775' of type subvolume Feb 1 05:02:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ca5077e-5800-458a-bcdd-debc86c3a775", "force": true, "format": "json"}]: dispatch Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0ca5077e-5800-458a-bcdd-debc86c3a775'' moved to trashcan Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ca5077e-5800-458a-bcdd-debc86c3a775, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fe07adb4-ff99-4e52-87fe-1c3a91d3e012", "format": "json"}]: dispatch Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:17 localhost nova_compute[274317]: 2026-02-01 10:02:17.168 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:17.170+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe07adb4-ff99-4e52-87fe-1c3a91d3e012' of type subvolume Feb 1 05:02:17 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe07adb4-ff99-4e52-87fe-1c3a91d3e012' of type subvolume Feb 1 05:02:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fe07adb4-ff99-4e52-87fe-1c3a91d3e012", "force": true, "format": "json"}]: dispatch Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fe07adb4-ff99-4e52-87fe-1c3a91d3e012'' moved to trashcan Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe07adb4-ff99-4e52-87fe-1c3a91d3e012, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "554e72a1-8e1b-418d-8d4e-df4ac0aff10b", "format": "json"}]: dispatch Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:17.514+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '554e72a1-8e1b-418d-8d4e-df4ac0aff10b' of type subvolume Feb 1 05:02:17 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '554e72a1-8e1b-418d-8d4e-df4ac0aff10b' of type subvolume Feb 1 05:02:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "554e72a1-8e1b-418d-8d4e-df4ac0aff10b", "force": true, "format": "json"}]: dispatch Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/554e72a1-8e1b-418d-8d4e-df4ac0aff10b'' moved to 
trashcan Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:554e72a1-8e1b-418d-8d4e-df4ac0aff10b, vol_name:cephfs) < "" Feb 1 05:02:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 79 KiB/s wr, 8 op/s Feb 1 05:02:18 localhost nova_compute[274317]: 2026-02-01 10:02:18.662 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "snap_name": "79f87f1d-b2e3-46b8-8f19-152c3b678c27_f112651e-0368-4663-86fe-3d087644550a", "force": true, "format": "json"}]: dispatch Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27_f112651e-0368-4663-86fe-3d087644550a, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27_f112651e-0368-4663-86fe-3d087644550a, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "snap_name": "79f87f1d-b2e3-46b8-8f19-152c3b678c27", "force": true, "format": "json"}]: dispatch Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta.tmp' to config b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f/.meta' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79f87f1d-b2e3-46b8-8f19-152c3b678c27, 
sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7de08017-7d25-4fb7-a96f-e3746cdc7d6f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de08017-7d25-4fb7-a96f-e3746cdc7d6f/.meta.tmp' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de08017-7d25-4fb7-a96f-e3746cdc7d6f/.meta.tmp' to config b'/volumes/_nogroup/7de08017-7d25-4fb7-a96f-e3746cdc7d6f/.meta' Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7de08017-7d25-4fb7-a96f-e3746cdc7d6f", "format": "json"}]: dispatch Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 68 KiB/s wr, 7 op/s Feb 1 05:02:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:02:21 Feb 1 05:02:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:02:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:02:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'backups', '.mgr', 'images', 'vms', 'manila_data'] Feb 1 05:02:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
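The audit entries above show client.openstack (the Manila CephFS driver) working the mgr volumes module through one share lifecycle: create a subvolume, resolve its export path, snapshot it, then remove snapshot and subvolume. A minimal sketch of the same flow through the ceph CLI follows; the helper function, the subvolume name, and the snapshot name are illustrative rather than taken from the log, and the flag spellings assume a recent Ceph release.

    import subprocess
    import uuid

    def ceph(*args):
        # Invoke the ceph CLI and return its stdout; the audit lines above record
        # client.openstack dispatching the equivalent commands via the mgr.
        return subprocess.run(("ceph", *args), check=True,
                              capture_output=True, text=True).stdout.strip()

    share = str(uuid.uuid4())  # illustrative name; Manila chooses its own UUIDs
    ceph("fs", "subvolume", "create", "cephfs", share,
         "--size", "1073741824", "--mode", "0755", "--namespace-isolated")
    print(ceph("fs", "subvolume", "getpath", "cephfs", share))  # /volumes/_nogroup/<share>/...
    ceph("fs", "subvolume", "snapshot", "create", "cephfs", share, "snap-1")
    # "fs clone status" on a plain subvolume fails with EOPNOTSUPP (95), which is what
    # the mgr.server "Operation not supported" replies in this log correspond to.
    ceph("fs", "subvolume", "snapshot", "rm", "cephfs", share, "snap-1", "--force")
    ceph("fs", "subvolume", "rm", "cephfs", share, "--force")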
Feb 1 05:02:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-06 of space, bias 1.0, pg target 0.0005425347222222222 quantized to 32 (current 32) Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:02:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0006395911850921273 of space, bias 4.0, pg target 0.5091145833333334 quantized to 16 (current 16) Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:02:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:02:22 localhost nova_compute[274317]: 2026-02-01 10:02:22.198 274321 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 68 KiB/s wr, 7 op/s Feb 1 05:02:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "format": "json"}]: dispatch Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b862115e-9a5c-498d-a7c3-95dba802af7f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b862115e-9a5c-498d-a7c3-95dba802af7f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:23 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:23.279+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b862115e-9a5c-498d-a7c3-95dba802af7f' of type subvolume Feb 1 05:02:23 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b862115e-9a5c-498d-a7c3-95dba802af7f' of type subvolume Feb 1 05:02:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b862115e-9a5c-498d-a7c3-95dba802af7f", "force": true, "format": "json"}]: dispatch Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b862115e-9a5c-498d-a7c3-95dba802af7f'' moved to trashcan Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b862115e-9a5c-498d-a7c3-95dba802af7f, vol_name:cephfs) < "" Feb 1 05:02:23 localhost nova_compute[274317]: 2026-02-01 10:02:23.710 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7de08017-7d25-4fb7-a96f-e3746cdc7d6f", "format": "json"}]: dispatch Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:24 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 
2026-02-01T10:02:24.008+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de08017-7d25-4fb7-a96f-e3746cdc7d6f' of type subvolume Feb 1 05:02:24 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de08017-7d25-4fb7-a96f-e3746cdc7d6f' of type subvolume Feb 1 05:02:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7de08017-7d25-4fb7-a96f-e3746cdc7d6f", "force": true, "format": "json"}]: dispatch Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7de08017-7d25-4fb7-a96f-e3746cdc7d6f'' moved to trashcan Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de08017-7d25-4fb7-a96f-e3746cdc7d6f, vol_name:cephfs) < "" Feb 1 05:02:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 92 KiB/s wr, 10 op/s Feb 1 05:02:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e275 e275: 6 total, 6 up, 6 in Feb 1 05:02:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 96 KiB/s wr, 10 op/s Feb 1 05:02:27 localhost nova_compute[274317]: 2026-02-01 10:02:27.238 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 68 KiB/s wr, 6 op/s Feb 1 05:02:28 localhost nova_compute[274317]: 2026-02-01 10:02:28.754 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/.meta.tmp' Feb 1 05:02:28 
localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/.meta.tmp' to config b'/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/.meta' Feb 1 05:02:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "format": "json"}]: dispatch Feb 1 05:02:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:30 localhost podman[236852]: time="2026-02-01T10:02:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:02:30 localhost podman[236852]: @ - - [01/Feb/2026:10:02:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:02:30 localhost podman[236852]: @ - - [01/Feb/2026:10:02:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18336 "" "Go-http-client/1.1" Feb 1 05:02:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 68 KiB/s wr, 6 op/s Feb 1 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:02:30 localhost podman[316969]: 2026-02-01 10:02:30.881260717 +0000 UTC m=+0.093155934 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, architecture=x86_64, container_name=openstack_network_exporter) Feb 1 05:02:30 localhost podman[316969]: 2026-02-01 10:02:30.892330272 +0000 UTC m=+0.104225519 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, release=1769056855, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 05:02:30 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
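Each healthcheck entry above follows the same pattern: systemd starts a transient unit named after the container ID, the unit runs /usr/bin/podman healthcheck run, podman records a health_status/exec_died pair, and the unit deactivates. A minimal sketch of triggering the same check by container name is below; the container names are the ones shown in the log, while the helper and the loop are illustrative.

    import subprocess

    def container_healthy(name: str) -> bool:
        # Run the container's defined healthcheck once, as the transient systemd
        # units above do; exit status 0 means the check passed, anything else is
        # treated as unhealthy here.
        return subprocess.run(("podman", "healthcheck", "run", name)).returncode == 0

    for name in ("ceilometer_agent_compute", "podman_exporter", "ovn_metadata_agent"):
        print(name, "healthy" if container_healthy(name) else "unhealthy")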
Feb 1 05:02:30 localhost podman[316970]: 2026-02-01 10:02:30.938547402 +0000 UTC m=+0.145311670 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Feb 1 05:02:30 localhost podman[316970]: 2026-02-01 10:02:30.947681996 +0000 UTC m=+0.154446274 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Feb 1 05:02:30 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:02:30 localhost podman[316977]: 2026-02-01 10:02:30.998643885 +0000 UTC m=+0.199942553 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:02:31 localhost podman[316977]: 2026-02-01 10:02:31.006878562 +0000 UTC m=+0.208177240 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:02:31 localhost 
systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:02:31 localhost podman[316971]: 2026-02-01 10:02:31.094859643 +0000 UTC m=+0.299969199 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 1 05:02:31 localhost podman[316971]: 2026-02-01 10:02:31.185881941 +0000 UTC m=+0.390991427 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 1 05:02:31 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
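The exporter containers above run with net=host and publish metrics on the host ports listed in their config_data (node_exporter 9100, openstack_network_exporter 9105, podman_exporter 9882). A quick way to confirm they answer is to fetch /metrics directly; the /metrics path and the localhost assumption are standard Prometheus-exporter conventions, not something recorded in this log.

    from urllib.request import urlopen

    # Ports taken from the config_data blocks above; exporters bind on the host network.
    for port in (9100, 9105, 9882):
        try:
            with urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
                print(port, resp.readline().decode().strip())
        except OSError as exc:
            print(port, "unreachable:", exc)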
Feb 1 05:02:31 localhost openstack_network_exporter[239388]: ERROR 10:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:02:31 localhost openstack_network_exporter[239388]: Feb 1 05:02:31 localhost openstack_network_exporter[239388]: ERROR 10:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:02:31 localhost openstack_network_exporter[239388]: Feb 1 05:02:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e276 e276: 6 total, 6 up, 6 in Feb 1 05:02:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:02:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:02:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:02:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:02:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:32 localhost nova_compute[274317]: 2026-02-01 10:02:32.282 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 49 KiB/s wr, 3 op/s Feb 1 05:02:32 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", 
"format": "json"} : dispatch Feb 1 05:02:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:32 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:02:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "format": "json"}]: dispatch Feb 1 05:02:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:33 localhost nova_compute[274317]: 2026-02-01 10:02:33.794 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 326 B/s rd, 72 KiB/s wr, 5 op/s Feb 1 05:02:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "format": "json"}]: dispatch Feb 1 05:02:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:02:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:02:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "format": "json"}]: dispatch Feb 1 05:02:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:36 localhost ovn_controller[152787]: 2026-02-01T10:02:36Z|00262|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Feb 1 05:02:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 67 KiB/s wr, 4 op/s Feb 1 05:02:37 localhost nova_compute[274317]: 2026-02-01 10:02:37.314 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 45 KiB/s wr, 3 op/s Feb 1 05:02:38 localhost nova_compute[274317]: 2026-02-01 10:02:38.837 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "format": "json"}]: dispatch Feb 1 05:02:39 localhost systemd-journald[47940]: Data hash table of /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Feb 1 05:02:39 localhost systemd-journald[47940]: /run/log/journal/00836dadc27b01f9fb0a211cca69e688/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 1 05:02:39 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:02:39 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 1 05:02:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "format": "json"}]: dispatch Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "85bf3dc4-239a-4ea6-b907-935513f36b9b", "force": true, "format": "json"}]: dispatch Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/85bf3dc4-239a-4ea6-b907-935513f36b9b'' moved to trashcan Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85bf3dc4-239a-4ea6-b907-935513f36b9b, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:02:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:02:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": 
"alice", "format": "json"}]: dispatch Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:02:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 45 KiB/s wr, 3 op/s Feb 1 05:02:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "snap_name": "7c62edf1-e706-4a7c-a38f-d41949f0e0ac_275fdc7f-d005-4a46-b0bc-2dc898355e68", "force": true, "format": "json"}]: dispatch Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac_275fdc7f-d005-4a46-b0bc-2dc898355e68, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta' Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac_275fdc7f-d005-4a46-b0bc-2dc898355e68, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "snap_name": "7c62edf1-e706-4a7c-a38f-d41949f0e0ac", "force": true, "format": "json"}]: dispatch Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:40 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:40 localhost ceph-mon[298604]: 
from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:40 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:40 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta.tmp' to config b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2/.meta' Feb 1 05:02:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c62edf1-e706-4a7c-a38f-d41949f0e0ac, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "format": "json"}]: dispatch Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c1c4137-22b3-4b8a-9eaf-875da7fa2508", "force": true, "format": "json"}]: dispatch Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9c1c4137-22b3-4b8a-9eaf-875da7fa2508'' moved to trashcan Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c1c4137-22b3-4b8a-9eaf-875da7fa2508, vol_name:cephfs) < "" Feb 1 05:02:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:02:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:41.778 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:02:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:41.779 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:02:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:02:41.779 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:02:41 localhost systemd[1]: tmp-crun.z1jA5m.mount: Deactivated successfully. Feb 1 05:02:41 localhost podman[317055]: 2026-02-01 10:02:41.87620353 +0000 UTC m=+0.094396594 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 05:02:41 localhost podman[317055]: 2026-02-01 10:02:41.915757042 +0000 UTC m=+0.133950106 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:02:41 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:02:42 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:02:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:02:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:42 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:02:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:02:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": 
["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:42 localhost nova_compute[274317]: 2026-02-01 10:02:42.348 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 98 B/s rd, 44 KiB/s wr, 3 op/s Feb 1 05:02:42 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:42 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:42 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:02:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "format": "json"}]: dispatch Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:43.724+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7c65253-5d6c-4617-9070-0a8b5ac1c2b2' of type subvolume Feb 1 05:02:43 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume 'd7c65253-5d6c-4617-9070-0a8b5ac1c2b2' of type subvolume Feb 1 05:02:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7c65253-5d6c-4617-9070-0a8b5ac1c2b2", "force": true, "format": "json"}]: dispatch Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d7c65253-5d6c-4617-9070-0a8b5ac1c2b2'' moved to trashcan Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7c65253-5d6c-4617-9070-0a8b5ac1c2b2, vol_name:cephfs) < "" Feb 1 05:02:43 localhost nova_compute[274317]: 2026-02-01 10:02:43.870 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "snap_name": "399a0ea4-3929-4405-9bc1-c3a475bd2a27_86b1b596-8b78-46fd-b8a1-a58d7c899be4", "force": true, "format": "json"}]: dispatch Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27_86b1b596-8b78-46fd-b8a1-a58d7c899be4, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta' Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27_86b1b596-8b78-46fd-b8a1-a58d7c899be4, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "snap_name": "399a0ea4-3929-4405-9bc1-c3a475bd2a27", "force": true, "format": "json"}]: dispatch Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 
bytes to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta.tmp' to config b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb/.meta' Feb 1 05:02:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:399a0ea4-3929-4405-9bc1-c3a475bd2a27, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 107 KiB/s wr, 7 op/s Feb 1 05:02:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e277 e277: 6 total, 6 up, 6 in Feb 1 05:02:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:02:44 localhost systemd[1]: tmp-crun.fu87bi.mount: Deactivated successfully. Feb 1 05:02:44 localhost podman[317074]: 2026-02-01 10:02:44.845020543 +0000 UTC m=+0.067571886 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 1 05:02:44 localhost podman[317074]: 2026-02-01 10:02:44.853423765 +0000 UTC m=+0.075975128 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 05:02:44 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:02:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:02:45 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:02:45 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e278 e278: 6 total, 6 up, 6 in Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:02:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 104 KiB/s wr, 7 op/s Feb 1 05:02:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:02:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:46 localhost 
ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:02:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:02:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:02:46 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:02:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:02:46 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:02:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:02:46 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev add8c773-a56c-48c6-af63-e78b8a024ad6 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:02:46 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev add8c773-a56c-48c6-af63-e78b8a024ad6 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:02:46 localhost ceph-mgr[278126]: [progress INFO root] Completed event add8c773-a56c-48c6-af63-e78b8a024ad6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:02:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:02:46 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:02:47 localhost nova_compute[274317]: 2026-02-01 10:02:47.385 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:47 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "format": "json"}]: dispatch Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0871d823-23d6-4b37-9920-427f6d28d0fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0871d823-23d6-4b37-9920-427f6d28d0fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:02:47 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:02:47.444+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0871d823-23d6-4b37-9920-427f6d28d0fb' of type subvolume Feb 1 05:02:47 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0871d823-23d6-4b37-9920-427f6d28d0fb' of type subvolume Feb 1 05:02:47 localhost ceph-mgr[278126]: log_channel(audit) log 
[DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0871d823-23d6-4b37-9920-427f6d28d0fb", "force": true, "format": "json"}]: dispatch Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0871d823-23d6-4b37-9920-427f6d28d0fb'' moved to trashcan Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:02:47 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0871d823-23d6-4b37-9920-427f6d28d0fb, vol_name:cephfs) < "" Feb 1 05:02:47 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:02:47 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:02:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 185 KiB/s wr, 15 op/s Feb 1 05:02:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:02:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:02:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:02:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:02:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", 
"mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:48 localhost nova_compute[274317]: 2026-02-01 10:02:48.875 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:02:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 185 KiB/s wr, 15 op/s Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 05:02:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:02:51 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:02:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:02:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e279 e279: 6 total, 6 up, 6 in Feb 1 05:02:52 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:02:52 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:02:52 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:52 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:02:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 81 KiB/s wr, 8 op/s Feb 1 05:02:52 localhost nova_compute[274317]: 2026-02-01 10:02:52.432 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:02:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:52 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:52 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:02:53 localhost nova_compute[274317]: 2026-02-01 10:02:53.910 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 136 KiB/s wr, 12 op/s Feb 1 05:02:55 localhost nova_compute[274317]: 2026-02-01 10:02:55.121 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:02:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:02:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:02:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:02:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:02:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : 
from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:02:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:02:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:02:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 121 KiB/s wr, 11 op/s Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.114 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.115 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.115 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.206 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network 
info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.207 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.224 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.224 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.225 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.225 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.226 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.487 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:02:57 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1141303840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.668 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.867 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.868 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11536MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.869 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.869 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.961 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.962 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:02:57 localhost nova_compute[274317]: 2026-02-01 10:02:57.992 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing inventories for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.009 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating ProviderTree inventory for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.009 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Updating inventory in ProviderTree for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.047 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing aggregate associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.104 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Refreshing trait associations for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.125 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:02:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 89 KiB/s wr, 7 op/s Feb 1 05:02:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:02:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:02:58 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2492180565' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.568 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.576 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.607 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta' Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.610 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.611 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:02:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "format": "json"}]: dispatch Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:02:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:02:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:58 localhost nova_compute[274317]: 2026-02-01 10:02:58.952 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:02:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:02:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:02:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:02:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:02:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta' Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "format": "json"}]: dispatch Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:02:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:02:59 localhost nova_compute[274317]: 2026-02-01 10:02:59.505 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:59 localhost nova_compute[274317]: 2026-02-01 10:02:59.506 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:02:59 localhost nova_compute[274317]: 2026-02-01 10:02:59.506 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:00 localhost podman[236852]: time="2026-02-01T10:03:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:03:00 localhost podman[236852]: @ - - [01/Feb/2026:10:03:00 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:03:00 localhost podman[236852]: @ - - [01/Feb/2026:10:03:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18330 "" "Go-http-client/1.1" Feb 1 05:03:00 localhost nova_compute[274317]: 2026-02-01 10:03:00.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:00 localhost nova_compute[274317]: 2026-02-01 10:03:00.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:03:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 89 KiB/s wr, 7 op/s Feb 1 05:03:01 localhost openstack_network_exporter[239388]: ERROR 10:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:03:01 localhost openstack_network_exporter[239388]: Feb 1 05:03:01 localhost openstack_network_exporter[239388]: ERROR 10:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:03:01 localhost openstack_network_exporter[239388]: Feb 1 05:03:01 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:01.635 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:03:01 localhost nova_compute[274317]: 2026-02-01 10:03:01.635 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:01 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:01.637 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:03:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "snap_name": "2868f0e0-7db3-4bfb-b89b-d896cb2f8687", "format": "json"}]: dispatch Feb 1 05:03:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, 
vol_name:cephfs) < "" Feb 1 05:03:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:03:01 localhost systemd[1]: tmp-crun.quAkkL.mount: Deactivated successfully. Feb 1 05:03:01 localhost podman[317231]: 2026-02-01 10:03:01.884020107 +0000 UTC m=+0.087311670 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 1 05:03:01 localhost systemd[1]: tmp-crun.SqW7XY.mount: Deactivated successfully. 
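
The update_available_resource pass recorded above ends with nova-compute reporting an inventory of VCPU, MEMORY_MB and DISK_GB (each with total, reserved and allocation_ratio) for resource provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590. As a minimal aside, assuming the standard Placement capacity rule capacity = (total - reserved) * allocation_ratio, the logged figures work out as follows:

# Sketch only: recompute the schedulable capacity implied by the inventory
# nova-compute logs above, using the usual Placement rule
# capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 36

So this otherwise idle host with 8 physical vCPUs and roughly 15.7 GB of RAM advertises 128 schedulable vCPUs (16x CPU overcommit), 15226 MB of RAM and 36 GB of disk, which is consistent with the "Inventory has not changed" comparisons the scheduler report client logs on each periodic run.
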
Feb 1 05:03:01 localhost podman[317232]: 2026-02-01 10:03:01.929395335 +0000 UTC m=+0.128814678 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:03:01 localhost podman[317243]: 2026-02-01 10:03:01.89248108 +0000 UTC m=+0.081141059 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:03:01 localhost podman[317232]: 2026-02-01 10:03:01.958616731 +0000 UTC m=+0.158036094 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 05:03:01 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:03:01 localhost podman[317231]: 2026-02-01 10:03:01.968728345 +0000 UTC m=+0.172019948 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 1 05:03:01 localhost podman[317243]: 2026-02-01 10:03:01.976833507 +0000 UTC m=+0.165493486 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus 
Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 05:03:01 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:03:01 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:03:02 localhost podman[317230]: 2026-02-01 10:03:02.041910276 +0000 UTC m=+0.246383596 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, name=ubi9/ubi-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, version=9.7, config_id=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 05:03:02 localhost podman[317230]: 2026-02-01 10:03:02.058656166 +0000 UTC m=+0.263129486 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, managed_by=edpm_ansible, config_id=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 1 05:03:02 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
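
The ceph-mgr and ceph-mon entries around this point show the complete Manila share-access round trip: an "fs subvolume authorize" issued by client.openstack is turned by the volumes module into an "auth get-or-create" for client.<auth_id> carrying mds (allow r|rw path=<subvolume path>), osd (allow r|rw pool=manila_data namespace=fsvolumens_<sub_name>) and mon (allow r) caps, and the later "fs subvolume deauthorize" plus "fs subvolume evict" remove that identity again and kick any clients still mounted. Below is a minimal sketch of driving the same cycle by hand through the ceph CLI; the vol/sub/auth_id values are copied from the log entries, while reusing the --id openstack credentials from the nova "ceph df" calls is an assumption (it only works if that keyring has the mgr caps to run fs subvolume commands, as it evidently does in this deployment):

# Sketch only: replay the subvolume grant/revoke cycle seen in the log
# through the ceph CLI instead of the Manila driver.
import subprocess

def ceph(*args):
    # Same credential/conf options as the "ceph df --format=json --id openstack
    # --conf /etc/ceph/ceph.conf" invocations logged by nova-compute above.
    return subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

vol = "cephfs"
sub = "1c2f0941-aab0-42d0-937e-94c942e5fb88"
auth_id = "alice_bob"

path = ceph("fs", "subvolume", "getpath", vol, sub)   # /volumes/_nogroup/<sub>/<uuid>
key = ceph("fs", "subvolume", "authorize", vol, sub, auth_id,
           "--access_level=r")                        # prints the cephx key for client.alice_bob
# (the tenant_id the log shows Manila passing is omitted in this sketch)

# ... hand (path, key) to the consumer; later revoke access and evict any
# clients that still have the subvolume mounted, as the log does:
ceph("fs", "subvolume", "deauthorize", vol, sub, auth_id)
ceph("fs", "subvolume", "evict", vol, sub, auth_id)

The mon audit lines in the log show the caps this produces on the Ceph side, e.g. for the read-only grant: mds "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", osd "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", mon "allow r".
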
Feb 1 05:03:02 localhost nova_compute[274317]: 2026-02-01 10:03:02.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:02 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:02 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:02 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 294 B/s rd, 86 KiB/s wr, 7 op/s Feb 1 05:03:02 localhost nova_compute[274317]: 2026-02-01 10:03:02.490 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "snap_name": "4fa60cbb-7815-4e58-abbf-0715923dbf39", "format": "json"}]: dispatch Feb 1 05:03:02 localhost 
ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:03:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:03:02 localhost systemd[1]: tmp-crun.IVPB8c.mount: Deactivated successfully. Feb 1 05:03:03 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:03 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:03 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:03 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.409 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost 
ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:03:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:03:03 localhost nova_compute[274317]: 2026-02-01 10:03:03.987 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 131 KiB/s wr, 11 op/s Feb 1 05:03:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "snap_name": "2868f0e0-7db3-4bfb-b89b-d896cb2f8687_a43bf7eb-b41e-4214-bacc-27135e2bb93d", "force": true, "format": "json"}]: dispatch Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687_a43bf7eb-b41e-4214-bacc-27135e2bb93d, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta' Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687_a43bf7eb-b41e-4214-bacc-27135e2bb93d, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "snap_name": "2868f0e0-7db3-4bfb-b89b-d896cb2f8687", "force": true, "format": "json"}]: dispatch Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta.tmp' to config b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573/.meta' Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2868f0e0-7db3-4bfb-b89b-d896cb2f8687, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:05 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:03:05 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 
-' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:03:06 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "snap_name": "4fa60cbb-7815-4e58-abbf-0715923dbf39", "target_sub_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "format": "json"}]: dispatch Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, target_sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:03:06 localhost nova_compute[274317]: 2026-02-01 10:03:06.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 7dfead6c-cfba-4d88-894e-6b3b4ee708c8 for path b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, target_sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895 Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a68d53cc-1ebe-4c8a-93d3-742bd1afa895) Feb 1 05:03:06 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "format": "json"}]: dispatch Feb 1 05:03:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 84 KiB/s wr, 7 op/s Feb 1 05:03:07 localhost nova_compute[274317]: 2026-02-01 10:03:07.529 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 128 KiB/s wr, 10 op/s Feb 1 05:03:09 localhost nova_compute[274317]: 2026-02-01 10:03:09.031 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a68d53cc-1ebe-4c8a-93d3-742bd1afa895) -- by 0 seconds Feb 1 05:03:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' Feb 1 05:03:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta' Feb 1 05:03:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "size": 1073741824, 
"namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:09 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:09.640 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:03:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e280 e280: 6 total, 6 up, 6 in Feb 1 05:03:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 121 KiB/s wr, 9 op/s Feb 1 05:03:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 121 KiB/s wr, 9 op/s Feb 1 05:03:12 localhost nova_compute[274317]: 2026-02-01 10:03:12.532 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:03:12 localhost podman[317314]: 2026-02-01 10:03:12.875085631 +0000 UTC m=+0.087038912 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true) Feb 1 05:03:12 localhost podman[317314]: 2026-02-01 10:03:12.890911072 +0000 UTC m=+0.102864323 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 05:03:12 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:03:14 localhost nova_compute[274317]: 2026-02-01 10:03:14.083 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 69 KiB/s wr, 6 op/s Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.snap/4fa60cbb-7815-4e58-abbf-0715923dbf39/f0336b65-9c01-40bb-a388-6b61617a489b' to b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/1314091c-97e9-4a58-b3cc-ccc6f0168b91' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "format": "json"}]: dispatch Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.clone_index] untracking 7dfead6c-cfba-4d88-894e-6b3b4ee708c8 Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' to config 
b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta.tmp' to config b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895/.meta' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a68d53cc-1ebe-4c8a-93d3-742bd1afa895) Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "format": "json"}]: dispatch Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:03:14.639+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cad5faf1-ff59-4c07-a06b-c60dd8871573' of type subvolume Feb 1 05:03:14 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cad5faf1-ff59-4c07-a06b-c60dd8871573' of type subvolume Feb 1 05:03:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cad5faf1-ff59-4c07-a06b-c60dd8871573", "force": true, "format": "json"}]: dispatch Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cad5faf1-ff59-4c07-a06b-c60dd8871573'' moved to trashcan Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cad5faf1-ff59-4c07-a06b-c60dd8871573, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:03:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:14 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:14 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:14 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:15 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:15 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:15 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:15 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:03:15 localhost podman[317333]: 2026-02-01 10:03:15.859080365 +0000 UTC m=+0.076234837 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:03:15 localhost podman[317333]: 2026-02-01 10:03:15.867372622 +0000 UTC m=+0.084527114 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 05:03:15 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:03:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:16 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:16 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:03:16 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:16 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:16 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 69 KiB/s wr, 6 op/s Feb 1 05:03:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' 
cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:03:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e281 e281: 6 total, 6 up, 6 in Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.024588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197024627, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1952, "num_deletes": 259, "total_data_size": 3027823, "memory_usage": 3166192, "flush_reason": "Manual Compaction"} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197035141, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1603814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28419, "largest_seqno": 30366, "table_properties": {"data_size": 1597125, "index_size": 3518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 18887, "raw_average_key_size": 22, "raw_value_size": 1581988, "raw_average_value_size": 1878, "num_data_blocks": 153, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769940100, "oldest_key_time": 1769940100, "file_creation_time": 1769940197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10610 microseconds, and 5244 cpu microseconds. Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.035197) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1603814 bytes OK Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.035220) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.037366) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.037388) EVENT_LOG_v1 {"time_micros": 1769940197037381, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.037408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3018436, prev total WAL file size 3018436, number of live WAL files 2. Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.038519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303130' seq:72057594037927935, type:22 .. '6D6772737461740034323631' seq:0, type:0; will stop at (end) Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1566KB)], [42(21MB)] Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197038568, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 24150978, "oldest_snapshot_seqno": -1} Feb 1 05:03:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "696b3ce7-12ee-4387-9824-52b489c80aa5", "format": "json"}]: dispatch Feb 1 05:03:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 14116 keys, 22384912 bytes, temperature: kUnknown Feb 1 
05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197186888, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 22384912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 22303771, "index_size": 44752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 378432, "raw_average_key_size": 26, "raw_value_size": 22063143, "raw_average_value_size": 1562, "num_data_blocks": 1670, "num_entries": 14116, "num_filter_entries": 14116, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769940197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.187215) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 22384912 bytes Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.189089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 150.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 21.5 +0.0 blob) out(21.3 +0.0 blob), read-write-amplify(29.0) write-amplify(14.0) OK, records in: 14614, records dropped: 498 output_compression: NoCompression Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.189116) EVENT_LOG_v1 {"time_micros": 1769940197189104, "job": 24, "event": "compaction_finished", "compaction_time_micros": 148411, "compaction_time_cpu_micros": 57141, "output_level": 6, "num_output_files": 1, "total_output_size": 22384912, "num_input_records": 14614, "num_output_records": 14116, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197189483, "job": 24, "event": "table_file_deletion", "file_number": 44} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000042.sst immediately, rate_bytes_per_sec 
0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940197192163, "job": 24, "event": "table_file_deletion", "file_number": 42} Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.038415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.192275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.192281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.192310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.192315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:17.192319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:17 localhost nova_compute[274317]: 2026-02-01 10:03:17.536 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 989 B/s rd, 145 KiB/s wr, 11 op/s Feb 1 05:03:19 localhost nova_compute[274317]: 2026-02-01 10:03:19.111 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:03:19 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:19 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], 
"format": "json"} v 0) Feb 1 05:03:19 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:20 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:20 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:20 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:20 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0", "format": "json"}]: dispatch Feb 1 05:03:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 818 B/s 
rd, 120 KiB/s wr, 9 op/s Feb 1 05:03:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:03:21 Feb 1 05:03:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:03:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:03:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'backups', 'manila_data', 'images', 'manila_metadata'] Feb 1 05:03:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:03:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32) Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:03:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.001023727578866555 of space, bias 4.0, pg target 0.8148871527777778 quantized to 16 (current 16) Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:03:21 
localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:03:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:03:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 120 KiB/s wr, 9 op/s Feb 1 05:03:22 localhost nova_compute[274317]: 2026-02-01 10:03:22.573 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:03:23 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:23 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:03:23 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, 
client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0_d8325836-905a-4ab3-a5ea-7befa865b70c", "force": true, "format": "json"}]: dispatch Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0_d8325836-905a-4ab3-a5ea-7befa865b70c, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:24 localhost nova_compute[274317]: 2026-02-01 10:03:24.127 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0_d8325836-905a-4ab3-a5ea-7befa865b70c, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0", "force": true, "format": "json"}]: dispatch Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:24 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:24 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:848c4aa1-8ab7-4a67-9f9a-90c43fa4d8b0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 132 KiB/s wr, 10 op/s Feb 1 05:03:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:03:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:03:26 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:26 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:26 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, 
prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 132 KiB/s wr, 10 op/s Feb 1 05:03:27 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:27 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:27 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:27 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "8d37ec79-a49b-4fd8-8bd5-a62773d06fd4", "format": "json"}]: dispatch Feb 1 05:03:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:27 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:27 localhost nova_compute[274317]: 2026-02-01 10:03:27.616 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 627 B/s rd, 194 KiB/s wr, 14 op/s Feb 1 05:03:29 localhost nova_compute[274317]: 2026-02-01 10:03:29.162 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:03:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:03:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:30 localhost podman[236852]: time="2026-02-01T10:03:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:03:30 localhost podman[236852]: @ - - [01/Feb/2026:10:03:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:03:30 localhost podman[236852]: @ - - [01/Feb/2026:10:03:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18330 "" "Go-http-client/1.1" Feb 1 05:03:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e282 e282: 6 total, 6 up, 6 in Feb 1 05:03:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:30 localhost ceph-mon[298604]: 
from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:03:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:03:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 118 KiB/s wr, 9 op/s Feb 1 05:03:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "8d37ec79-a49b-4fd8-8bd5-a62773d06fd4_b5ce561f-cd5d-411f-bf75-a44ec13260f2", "force": true, "format": "json"}]: dispatch Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4_b5ce561f-cd5d-411f-bf75-a44ec13260f2, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4_b5ce561f-cd5d-411f-bf75-a44ec13260f2, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "8d37ec79-a49b-4fd8-8bd5-a62773d06fd4", "force": true, "format": "json"}]: dispatch Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d37ec79-a49b-4fd8-8bd5-a62773d06fd4, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:31 localhost openstack_network_exporter[239388]: ERROR 10:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:03:31 localhost 
openstack_network_exporter[239388]: Feb 1 05:03:31 localhost openstack_network_exporter[239388]: ERROR 10:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:03:31 localhost openstack_network_exporter[239388]: Feb 1 05:03:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 118 KiB/s wr, 9 op/s Feb 1 05:03:32 localhost nova_compute[274317]: 2026-02-01 10:03:32.652 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:03:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:03:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:32 localhost podman[317360]: 2026-02-01 10:03:32.893885268 +0000 UTC m=+0.105557447 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:03:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:32 localhost podman[317360]: 2026-02-01 10:03:32.927115569 +0000 UTC m=+0.138787738 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 05:03:32 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:03:32 localhost podman[317361]: 2026-02-01 10:03:32.954556321 +0000 UTC m=+0.162707540 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 1 05:03:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:32 localhost podman[317361]: 2026-02-01 10:03:32.999202756 +0000 UTC m=+0.207353975 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 1 05:03:33 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:03:33 localhost podman[317359]: 2026-02-01 10:03:33.034997406 +0000 UTC m=+0.249947527 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, version=9.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 05:03:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:33 localhost podman[317359]: 2026-02-01 10:03:33.046185503 +0000 UTC m=+0.261135644 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, version=9.7, vcs-type=git, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 05:03:33 localhost podman[317363]: 2026-02-01 10:03:33.003584971 +0000 UTC m=+0.206167387 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:03:33 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 05:03:33 localhost podman[317363]: 2026-02-01 10:03:33.086629409 +0000 UTC m=+0.289211815 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:03:33 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:03:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "format": "json"}]: dispatch Feb 1 05:03:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:34 localhost nova_compute[274317]: 2026-02-01 10:03:34.211 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "format": "json"}]: dispatch Feb 1 05:03:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:03:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:03:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 134 KiB/s wr, 9 op/s Feb 1 05:03:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:03:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1594616730' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:03:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:03:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1594616730' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:03:34 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "13f167ab-ebf7-4b96-af44-07e5f74ed867", "format": "json"}]: dispatch Feb 1 05:03:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:34 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e283 e283: 6 total, 6 up, 6 in Feb 1 05:03:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:03:36 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:36 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:03:36 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, 
client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 56 KiB/s wr, 3 op/s Feb 1 05:03:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "637d1a6c-3835-4ba9-9fda-e6c8c27dede1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/637d1a6c-3835-4ba9-9fda-e6c8c27dede1/.meta.tmp' Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/637d1a6c-3835-4ba9-9fda-e6c8c27dede1/.meta.tmp' to config b'/volumes/_nogroup/637d1a6c-3835-4ba9-9fda-e6c8c27dede1/.meta' Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "637d1a6c-3835-4ba9-9fda-e6c8c27dede1", "format": "json"}]: dispatch Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:37 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e284 e284: 6 total, 6 up, 6 in Feb 1 05:03:37 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:37 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:37 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:37 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:03:37 localhost nova_compute[274317]: 
2026-02-01 10:03:37.716 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/.meta.tmp' Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/.meta.tmp' to config b'/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/.meta' Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:03:37 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "format": "json"}]: dispatch Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:03:37 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:03:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "13f167ab-ebf7-4b96-af44-07e5f74ed867_8c33dabf-d3cc-43d1-b075-e6959d64286e", "force": true, "format": "json"}]: dispatch Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867_8c33dabf-d3cc-43d1-b075-e6959d64286e, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867_8c33dabf-d3cc-43d1-b075-e6959d64286e, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "13f167ab-ebf7-4b96-af44-07e5f74ed867", "force": true, "format": "json"}]: dispatch Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:13f167ab-ebf7-4b96-af44-07e5f74ed867, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 155 KiB/s wr, 11 op/s Feb 1 05:03:39 localhost nova_compute[274317]: 2026-02-01 10:03:39.262 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:03:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:03:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:39 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data 
namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:39 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:39 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "637d1a6c-3835-4ba9-9fda-e6c8c27dede1", "new_size": 2147483648, "format": "json"}]: dispatch Feb 1 05:03:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:40 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:40 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:40 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:40 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 155 KiB/s wr, 11 op/s Feb 1 
05:03:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/.meta.tmp' Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/.meta.tmp' to config b'/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/.meta' Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "format": "json"}]: dispatch Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "6f784fe6-beb9-4d74-808e-938471da4202", "format": "json"}]: dispatch Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f784fe6-beb9-4d74-808e-938471da4202, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f784fe6-beb9-4d74-808e-938471da4202, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:41.779 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:03:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:41.780 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:03:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:03:41.780 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:03:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e285 e285: 6 total, 6 up, 6 in Feb 1 05:03:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 416 B/s rd, 107 KiB/s wr, 8 op/s Feb 1 05:03:42 localhost nova_compute[274317]: 2026-02-01 10:03:42.719 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:42 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:03:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:03:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:42 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume 
evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:43 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e286 e286: 6 total, 6 up, 6 in Feb 1 05:03:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:03:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:03:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:03:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "637d1a6c-3835-4ba9-9fda-e6c8c27dede1", "format": "json"}]: dispatch Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:43 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:03:43.262+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '637d1a6c-3835-4ba9-9fda-e6c8c27dede1' of type subvolume Feb 1 05:03:43 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '637d1a6c-3835-4ba9-9fda-e6c8c27dede1' of type subvolume Feb 1 05:03:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "637d1a6c-3835-4ba9-9fda-e6c8c27dede1", "force": true, "format": "json"}]: dispatch Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/637d1a6c-3835-4ba9-9fda-e6c8c27dede1'' moved to trashcan Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:637d1a6c-3835-4ba9-9fda-e6c8c27dede1, vol_name:cephfs) < "" Feb 1 05:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:03:43 localhost systemd[1]: tmp-crun.QNQy6T.mount: Deactivated successfully. 
Feb 1 05:03:43 localhost podman[317447]: 2026-02-01 10:03:43.874491058 +0000 UTC m=+0.089172248 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2) Feb 1 05:03:43 localhost podman[317447]: 2026-02-01 10:03:43.884766296 +0000 UTC m=+0.099447516 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Feb 1 05:03:43 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:03:44 localhost nova_compute[274317]: 2026-02-01 10:03:44.305 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 829 B/s rd, 208 KiB/s wr, 15 op/s Feb 1 05:03:44 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:03:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:03:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:44 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:03:44 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122", "osd", "allow rw pool=manila_data namespace=fsvolumens_f8deb5d1-795e-4dac-88f0-806d00540ce4", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:44 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122", "osd", "allow rw pool=manila_data namespace=fsvolumens_f8deb5d1-795e-4dac-88f0-806d00540ce4", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:44 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, 
sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:03:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:45 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:45 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122", "osd", "allow rw pool=manila_data namespace=fsvolumens_f8deb5d1-795e-4dac-88f0-806d00540ce4", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:45 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122", "osd", "allow rw pool=manila_data namespace=fsvolumens_f8deb5d1-795e-4dac-88f0-806d00540ce4", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:45 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122", "osd", "allow rw pool=manila_data namespace=fsvolumens_f8deb5d1-795e-4dac-88f0-806d00540ce4", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "6f784fe6-beb9-4d74-808e-938471da4202_ad0c4f3d-50bd-4842-99d7-23d08be6e9c0", "force": true, "format": "json"}]: dispatch Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f784fe6-beb9-4d74-808e-938471da4202_ad0c4f3d-50bd-4842-99d7-23d08be6e9c0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f784fe6-beb9-4d74-808e-938471da4202_ad0c4f3d-50bd-4842-99d7-23d08be6e9c0, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", 
"sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "6f784fe6-beb9-4d74-808e-938471da4202", "force": true, "format": "json"}]: dispatch Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f784fe6-beb9-4d74-808e-938471da4202, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f784fe6-beb9-4d74-808e-938471da4202, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:46 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:46 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:46 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:46 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, 
sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 94 KiB/s wr, 7 op/s Feb 1 05:03:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 05:03:46 localhost podman[317467]: 2026-02-01 10:03:46.87168371 +0000 UTC m=+0.087932040 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:03:46 localhost podman[317467]: 2026-02-01 10:03:46.886690545 +0000 UTC m=+0.102938865 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:03:46 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:03:47 localhost nova_compute[274317]: 2026-02-01 10:03:47.723 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:03:47 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:03:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:03:47 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:03:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:03:48 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 9c4a265e-2d9e-43c5-81d0-15b176e4af64 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:03:48 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 9c4a265e-2d9e-43c5-81d0-15b176e4af64 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:03:48 localhost ceph-mgr[278126]: [progress INFO root] Completed event 9c4a265e-2d9e-43c5-81d0-15b176e4af64 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:03:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:03:48 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:03:48 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:03:48 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:03:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:03:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : 
from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4/99a33b3d-5cc1-4f1d-bed0-5e5402dbd122 Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 239 KiB/s wr, 16 op/s Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:03:48.446+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f8deb5d1-795e-4dac-88f0-806d00540ce4' of type subvolume Feb 1 05:03:48 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f8deb5d1-795e-4dac-88f0-806d00540ce4' of type subvolume Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f8deb5d1-795e-4dac-88f0-806d00540ce4", "force": true, "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost 
ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f8deb5d1-795e-4dac-88f0-806d00540ce4'' moved to trashcan Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f8deb5d1-795e-4dac-88f0-806d00540ce4, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-osd[32318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "36e5267c-3b42-4026-8937-3923e0f02444", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/36e5267c-3b42-4026-8937-3923e0f02444/.meta.tmp' Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/36e5267c-3b42-4026-8937-3923e0f02444/.meta.tmp' to config b'/volumes/_nogroup/36e5267c-3b42-4026-8937-3923e0f02444/.meta' Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "36e5267c-3b42-4026-8937-3923e0f02444", "format": "json"}]: dispatch Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:03:49 localhost nova_compute[274317]: 2026-02-01 10:03:49.340 274321 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:49 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:03:49 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b", "format": "json"}]: dispatch Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:50 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:50 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:03:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e287 e287: 6 total, 6 up, 6 in Feb 1 05:03:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 239 KiB/s wr, 16 op/s Feb 1 05:03:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/.meta.tmp' Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/.meta.tmp' to config b'/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/.meta' Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "format": "json"}]: dispatch Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:03:51 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:03:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:03:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "36e5267c-3b42-4026-8937-3923e0f02444", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Feb 1 05:03:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e288 e288: 6 total, 6 up, 6 in Feb 1 05:03:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:03:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 145 KiB/s wr, 8 op/s Feb 1 05:03:52 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:03:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:52 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:52 localhost nova_compute[274317]: 2026-02-01 10:03:52.777 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:52 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.851332) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940232851462, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1053, "num_deletes": 253, "total_data_size": 1122006, "memory_usage": 1162800, "flush_reason": "Manual Compaction"} Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940232860501, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 726391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30371, "largest_seqno": 31419, "table_properties": {"data_size": 721755, "index_size": 2107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12343, "raw_average_key_size": 21, "raw_value_size": 711657, "raw_average_value_size": 1235, "num_data_blocks": 92, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769940197, "oldest_key_time": 1769940197, "file_creation_time": 1769940232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 9198 microseconds, and 4716 cpu microseconds. 
Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.860566) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 726391 bytes OK Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.860600) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.862464) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.862489) EVENT_LOG_v1 {"time_micros": 1769940232862482, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.862515) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1116446, prev total WAL file size 1116446, number of live WAL files 2. Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.863820) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. 
'7061786F73003132383032' seq:0, type:0; will stop at (end) Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(709KB)], [45(21MB)] Feb 1 05:03:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940232863894, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 23111303, "oldest_snapshot_seqno": -1} Feb 1 05:03:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 14164 keys, 21263630 bytes, temperature: kUnknown Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940233016094, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 21263630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21183088, "index_size": 44025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 380487, "raw_average_key_size": 26, "raw_value_size": 20942545, "raw_average_value_size": 1478, "num_data_blocks": 1634, "num_entries": 14164, "num_filter_entries": 14164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769940232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.016524) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 21263630 bytes Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.018374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.7 rd, 139.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 21.3 +0.0 blob) out(20.3 +0.0 blob), read-write-amplify(61.1) write-amplify(29.3) OK, records in: 14692, records dropped: 528 output_compression: NoCompression Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.018403) EVENT_LOG_v1 {"time_micros": 1769940233018390, "job": 26, "event": "compaction_finished", "compaction_time_micros": 152332, "compaction_time_cpu_micros": 51993, "output_level": 6, "num_output_files": 1, "total_output_size": 21263630, "num_input_records": 14692, "num_output_records": 14164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940233018644, "job": 26, "event": "table_file_deletion", "file_number": 47} Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940233021997, "job": 26, "event": "table_file_deletion", "file_number": 45} Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:52.863637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.022089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.022097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.022100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.022103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:03:53.022106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:03:53 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:53 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:53 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:53 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b_0036dad7-d892-418d-92b8-bb02442ec320", "force": true, "format": "json"}]: dispatch Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b_0036dad7-d892-418d-92b8-bb02442ec320, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b_0036dad7-d892-418d-92b8-bb02442ec320, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b", "force": true, "format": "json"}]: dispatch Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebc1e7ef-8442-4193-b15e-8f8ec54f9c3b, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 887 B/s rd, 276 KiB/s wr, 18 op/s Feb 1 05:03:54 localhost nova_compute[274317]: 2026-02-01 10:03:54.511 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:54 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:03:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:03:54 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:54 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:03:54 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659", "osd", "allow rw pool=manila_data namespace=fsvolumens_1f645241-9977-49f9-af5c-e54bd4454730", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:54 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659", "osd", "allow rw pool=manila_data namespace=fsvolumens_1f645241-9977-49f9-af5c-e54bd4454730", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:54 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:03:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e288 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:03:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "36e5267c-3b42-4026-8937-3923e0f02444", "format": "json"}]: dispatch Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:36e5267c-3b42-4026-8937-3923e0f02444, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:36e5267c-3b42-4026-8937-3923e0f02444, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:03:55.309+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '36e5267c-3b42-4026-8937-3923e0f02444' of type subvolume Feb 1 05:03:55 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '36e5267c-3b42-4026-8937-3923e0f02444' of type subvolume Feb 1 05:03:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "36e5267c-3b42-4026-8937-3923e0f02444", "force": true, "format": "json"}]: dispatch Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/36e5267c-3b42-4026-8937-3923e0f02444'' moved to trashcan Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:36e5267c-3b42-4026-8937-3923e0f02444, vol_name:cephfs) < "" Feb 1 05:03:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659", "osd", "allow rw pool=manila_data namespace=fsvolumens_1f645241-9977-49f9-af5c-e54bd4454730", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:55 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659", "osd", "allow rw pool=manila_data namespace=fsvolumens_1f645241-9977-49f9-af5c-e54bd4454730", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:55 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' 
cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659", "osd", "allow rw pool=manila_data namespace=fsvolumens_1f645241-9977-49f9-af5c-e54bd4454730", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:03:56 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:03:56 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:03:56 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 634 B/s rd, 132 KiB/s wr, 9 op/s Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:56 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:03:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e289 e289: 6 total, 6 up, 6 in Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.095 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.099 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.113 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:03:57 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:57 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:03:57 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:03:57 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:03:57 localhost nova_compute[274317]: 2026-02-01 10:03:57.832 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:57 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:03:57 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:03:57 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:57 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:03:57 localhost ceph-mon[298604]: 
log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730/afda1918-3e18-4669-abdc-aa0ca3b12659 Feb 1 05:03:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e290 e290: 6 total, 6 up, 6 in Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.123 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.123 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.123 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.124 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.124 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1f645241-9977-49f9-af5c-e54bd4454730", "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1f645241-9977-49f9-af5c-e54bd4454730, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1f645241-9977-49f9-af5c-e54bd4454730, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:03:58.191+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f645241-9977-49f9-af5c-e54bd4454730' of type subvolume Feb 1 05:03:58 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f645241-9977-49f9-af5c-e54bd4454730' of type subvolume Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1f645241-9977-49f9-af5c-e54bd4454730", "force": true, "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1f645241-9977-49f9-af5c-e54bd4454730'' moved to trashcan Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f645241-9977-49f9-af5c-e54bd4454730, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a3be12d6-b4dd-425b-b3bb-4918fb2827ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 358 KiB/s wr, 22 op/s Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' 
cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:58 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:03:58 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:03:58 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a3be12d6-b4dd-425b-b3bb-4918fb2827ad/.meta.tmp' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a3be12d6-b4dd-425b-b3bb-4918fb2827ad/.meta.tmp' to config b'/volumes/_nogroup/a3be12d6-b4dd-425b-b3bb-4918fb2827ad/.meta' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a3be12d6-b4dd-425b-b3bb-4918fb2827ad", "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:03:58 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/225147411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.633 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "696b3ce7-12ee-4387-9824-52b489c80aa5_20508bbb-3be1-454a-9a1a-2a3a9c3349a7", "force": true, "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5_20508bbb-3be1-454a-9a1a-2a3a9c3349a7, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.831 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5_20508bbb-3be1-454a-9a1a-2a3a9c3349a7, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.832 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11500MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.832 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.832 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:03:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "snap_name": "696b3ce7-12ee-4387-9824-52b489c80aa5", "force": true, "format": "json"}]: dispatch Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta.tmp' to config b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf/.meta' Feb 1 05:03:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:696b3ce7-12ee-4387-9824-52b489c80aa5, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.899 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.899 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:03:58 localhost nova_compute[274317]: 2026-02-01 10:03:58.921 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:03:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:03:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:03:59 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:59 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating 
meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:03:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:03:59 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4293205967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:03:59 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:03:59 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.379 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.386 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.407 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.408 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.409 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:03:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:03:59 localhost nova_compute[274317]: 2026-02-01 10:03:59.558 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:03:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:03:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:03:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:00 localhost podman[236852]: time="2026-02-01T10:04:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:04:00 localhost podman[236852]: @ - - [01/Feb/2026:10:04:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:04:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e291 e291: 6 total, 6 up, 6 in Feb 1 05:04:00 localhost podman[236852]: @ - - [01/Feb/2026:10:04:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18343 "" "Go-http-client/1.1" Feb 1 05:04:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 209 KiB/s wr, 12 op/s Feb 1 05:04:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/.meta.tmp' Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/.meta.tmp' to config b'/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/.meta' Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "format": "json"}]: dispatch Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:01 localhost nova_compute[274317]: 2026-02-01 10:04:01.409 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:01 localhost nova_compute[274317]: 2026-02-01 10:04:01.409 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:01 localhost nova_compute[274317]: 2026-02-01 10:04:01.410 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:01 localhost openstack_network_exporter[239388]: ERROR 10:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:04:01 localhost openstack_network_exporter[239388]: Feb 1 05:04:01 localhost openstack_network_exporter[239388]: ERROR 10:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:04:01 localhost openstack_network_exporter[239388]: Feb 1 05:04:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "format": "json"}]: dispatch Feb 1 05:04:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:02.005+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f2ab1bfc-ed8f-45cf-a782-901f372acfbf' of type subvolume Feb 1 05:04:02 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f2ab1bfc-ed8f-45cf-a782-901f372acfbf' of type subvolume Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f2ab1bfc-ed8f-45cf-a782-901f372acfbf", "force": true, "format": "json"}]: dispatch Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f2ab1bfc-ed8f-45cf-a782-901f372acfbf'' moved to trashcan Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f2ab1bfc-ed8f-45cf-a782-901f372acfbf, vol_name:cephfs) < "" Feb 1 05:04:02 localhost nova_compute[274317]: 2026-02-01 10:04:02.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:02 localhost nova_compute[274317]: 2026-02-01 10:04:02.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a3be12d6-b4dd-425b-b3bb-4918fb2827ad", "format": "json"}]: dispatch Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:02.510+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a3be12d6-b4dd-425b-b3bb-4918fb2827ad' of type subvolume Feb 1 05:04:02 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a3be12d6-b4dd-425b-b3bb-4918fb2827ad' of type subvolume Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 211 KiB/s wr, 15 op/s Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a3be12d6-b4dd-425b-b3bb-4918fb2827ad", "force": true, "format": "json"}]: dispatch Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a3be12d6-b4dd-425b-b3bb-4918fb2827ad'' moved to trashcan Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a3be12d6-b4dd-425b-b3bb-4918fb2827ad, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:02 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command 
mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:04:02 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:02 localhost nova_compute[274317]: 2026-02-01 10:04:02.867 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:03 localhost nova_compute[274317]: 2026-02-01 10:04:03.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:03 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:03 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:03 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:03 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 05:04:03 localhost podman[317628]: 2026-02-01 10:04:03.887503205 +0000 UTC m=+0.092662396 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 1 05:04:03 localhost podman[317628]: 2026-02-01 10:04:03.892519011 +0000 UTC m=+0.097678232 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 1 05:04:03 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:04:03 localhost podman[317629]: 2026-02-01 10:04:03.939702604 +0000 UTC m=+0.142757320 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:04:04 localhost podman[317633]: 2026-02-01 10:04:04.039930465 +0000 UTC m=+0.238798591 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:04:04 localhost podman[317629]: 2026-02-01 10:04:04.064965532 +0000 UTC m=+0.268020298 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller) Feb 1 05:04:04 localhost podman[317627]: 2026-02-01 10:04:04.016355084 +0000 UTC m=+0.225285353 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.openshift.expose-services=, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc) Feb 1 05:04:04 localhost podman[317633]: 2026-02-01 10:04:04.073764724 +0000 UTC m=+0.272632830 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:04:04 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:04:04 localhost podman[317627]: 2026-02-01 10:04:04.101768293 +0000 UTC m=+0.310698552 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, release=1769056855, vendor=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 05:04:04 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 05:04:04 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 05:04:04 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:04 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:04 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:04:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc", "osd", "allow rw pool=manila_data namespace=fsvolumens_4ee9c104-e931-4251-a599-f0ad33e4932d", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:04 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc", "osd", "allow rw pool=manila_data namespace=fsvolumens_4ee9c104-e931-4251-a599-f0ad33e4932d", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 959 B/s rd, 340 KiB/s wr, 21 op/s Feb 1 05:04:04 localhost nova_compute[274317]: 2026-02-01 10:04:04.559 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:05 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:05 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc", "osd", 
"allow rw pool=manila_data namespace=fsvolumens_4ee9c104-e931-4251-a599-f0ad33e4932d", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:05 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc", "osd", "allow rw pool=manila_data namespace=fsvolumens_4ee9c104-e931-4251-a599-f0ad33e4932d", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:05 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc", "osd", "allow rw pool=manila_data namespace=fsvolumens_4ee9c104-e931-4251-a599-f0ad33e4932d", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "format": "json"}]: dispatch Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a68d53cc-1ebe-4c8a-93d3-742bd1afa895", "force": true, "format": "json"}]: dispatch Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a68d53cc-1ebe-4c8a-93d3-742bd1afa895'' moved to trashcan Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a68d53cc-1ebe-4c8a-93d3-742bd1afa895, vol_name:cephfs) < "" Feb 1 05:04:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:04:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, 
tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:05 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:05 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:06 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:06 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:06 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:06 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:06.437 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:04:06 localhost nova_compute[274317]: 2026-02-01 10:04:06.438 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:06 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:06.439 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:04:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data 
namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:06 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:06 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 605 B/s rd, 152 KiB/s wr, 10 op/s Feb 1 05:04:07 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e292 e292: 6 total, 6 up, 6 in Feb 1 05:04:07 localhost nova_compute[274317]: 2026-02-01 10:04:07.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:07 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:07 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:04:07 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "auth_id": 
"tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d/29d14b5f-8066-45c6-8379-ca06b2f0e0bc Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "format": "json"}]: dispatch Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ee9c104-e931-4251-a599-f0ad33e4932d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:07 localhost nova_compute[274317]: 2026-02-01 10:04:07.913 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ee9c104-e931-4251-a599-f0ad33e4932d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:07.914+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ee9c104-e931-4251-a599-f0ad33e4932d' of type subvolume Feb 1 05:04:07 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ee9c104-e931-4251-a599-f0ad33e4932d' of type subvolume Feb 1 05:04:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ee9c104-e931-4251-a599-f0ad33e4932d", "force": true, "format": "json"}]: dispatch Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ee9c104-e931-4251-a599-f0ad33e4932d'' moved to trashcan Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ee9c104-e931-4251-a599-f0ad33e4932d, vol_name:cephfs) < "" Feb 1 05:04:08 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:08 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:08 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:08 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:04:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 306 KiB/s wr, 18 op/s Feb 1 05:04:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "snap_name": "4fa60cbb-7815-4e58-abbf-0715923dbf39_c889ce58-0d33-4136-87f7-ad74df7d884b", "force": true, "format": "json"}]: dispatch Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39_c889ce58-0d33-4136-87f7-ad74df7d884b, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta' Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39_c889ce58-0d33-4136-87f7-ad74df7d884b, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "snap_name": "4fa60cbb-7815-4e58-abbf-0715923dbf39", "force": true, "format": "json"}]: dispatch Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta.tmp' to config b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f/.meta' Feb 1 05:04:09 localhost nova_compute[274317]: 2026-02-01 10:04:09.096 274321 DEBUG oslo_service.periodic_task [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4fa60cbb-7815-4e58-abbf-0715923dbf39, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:09 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:04:09 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:09 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:09 localhost nova_compute[274317]: 2026-02-01 10:04:09.599 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:10 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:10 localhost 
ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:10 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:10 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:04:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e293 e293: 6 total, 6 up, 6 in Feb 1 05:04:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:10 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:10.441 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:04:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 321 KiB/s wr, 17 op/s Feb 1 05:04:10 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:10 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:10 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:10 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:04:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:10 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:11 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:11 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:11 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:11 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:12 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "format": "json"}]: dispatch Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:12 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6cb6a1d-1311-44d1-b899-a95bbef2e51f' of type subvolume Feb 1 05:04:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:12.231+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6cb6a1d-1311-44d1-b899-a95bbef2e51f' of type subvolume Feb 1 05:04:12 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"b6cb6a1d-1311-44d1-b899-a95bbef2e51f", "force": true, "format": "json"}]: dispatch Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6cb6a1d-1311-44d1-b899-a95bbef2e51f'' moved to trashcan Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6cb6a1d-1311-44d1-b899-a95bbef2e51f, vol_name:cephfs) < "" Feb 1 05:04:12 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:12 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:12 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 236 KiB/s wr, 14 op/s Feb 1 05:04:12 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:12 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:12 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:12 localhost nova_compute[274317]: 
2026-02-01 10:04:12.956 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:13 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:13 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:13 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:13 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:14 localhost ceph-osd[31357]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
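
The authorize flow captured above follows a fixed pattern: client.openstack asks the mgr volumes module for "fs subvolume authorize", and the module turns that into a mon "auth get" followed by "auth get-or-create" carrying three caps: an mds cap pinned to the subvolume path, an osd cap pinned to the manila_data pool plus the subvolume's fsvolumens_* RADOS namespace, and a read-only mon cap. The Python sketch below only reassembles that cap payload from values taken from the log records above; the helper names and argument set are illustrative and are not the actual ceph-mgr volumes or Manila driver code.

import json

def build_authorize_caps(sub_path, pool, namespace, access_level="rw"):
    # Cap triplet in the same shape as the "auth get-or-create" audit lines:
    # mds scoped to the subvolume path, osd scoped to pool + namespace,
    # mon read-only. access_level is "rw" or "r", as in the log.
    return [
        "mds", f"allow {access_level} path={sub_path}",
        "osd", f"allow {access_level} pool={pool} namespace={namespace}",
        "mon", "allow r",
    ]

def mon_auth_get_or_create(entity, caps):
    # JSON mon command of the same shape as the dispatched commands above.
    return json.dumps({
        "prefix": "auth get-or-create",
        "entity": f"client.{entity}",
        "caps": caps,
        "format": "json",
    })

if __name__ == "__main__":
    # Values copied from the audit records in this log.
    caps = build_authorize_caps(
        "/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934",
        pool="manila_data",
        namespace="fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c",
    )
    print(mon_auth_get_or_create("tempest-cephx-id-2018707573", caps))
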
Feb 1 05:04:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:14 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:14 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:04:14 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 329 KiB/s wr, 19 op/s Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:14 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934 Feb 1 05:04:14 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:14 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:14 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:14 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:14 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:14 localhost nova_compute[274317]: 2026-02-01 10:04:14.626 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:14 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:04:14 localhost systemd[1]: tmp-crun.Y9ccUe.mount: Deactivated successfully. Feb 1 05:04:14 localhost podman[317712]: 2026-02-01 10:04:14.878093176 +0000 UTC m=+0.094810223 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 1 05:04:14 localhost podman[317712]: 2026-02-01 10:04:14.889340235 +0000 UTC m=+0.106057202 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:04:14 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:04:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:15 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:04:15 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 278 KiB/s wr, 16 op/s Feb 1 05:04:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:04:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 e294: 6 total, 6 up, 6 in Feb 1 05:04:17 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:17 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:17 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:04:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:17 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:04:17 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:17 localhost podman[317734]: 2026-02-01 10:04:17.872679699 +0000 UTC m=+0.085786673 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:04:17 localhost podman[317734]: 2026-02-01 10:04:17.886593291 +0000 UTC m=+0.099700245 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 05:04:17 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
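
The revocation side, visible in the deauthorize and evict records above, also runs in a fixed order: "fs subvolume deauthorize" makes the mgr look up and remove the cephx entity (mon "auth get" then "auth rm"), and the follow-up "fs subvolume evict" kicks any CephFS client whose client_metadata.root still points at the subvolume path, so a dropped key cannot keep being used by an existing mount. Below is a minimal sketch of that two-step sequence, with the command payloads copied from the audit lines; how the caller actually submits them (ceph CLI or mgr command interface) is an assumption, not something the log states.

import json

# Revocation sequence as observed in the log: deauthorize first (removes the
# cephx key), then evict (disconnects clients still mounted on the subvolume).
SUB = "976e5581-4212-481d-a9bc-03c631888d9c"
AUTH_ID = "tempest-cephx-id-2018707573"

revoke_sequence = [
    {"prefix": "fs subvolume deauthorize", "vol_name": "cephfs",
     "sub_name": SUB, "auth_id": AUTH_ID, "format": "json"},
    {"prefix": "fs subvolume evict", "vol_name": "cephfs",
     "sub_name": SUB, "auth_id": AUTH_ID, "format": "json"},
]

for cmd in revoke_sequence:
    # Each payload matches a cmd=[{...}] entry dispatched by client.openstack above.
    print(json.dumps(cmd))
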
Feb 1 05:04:18 localhost nova_compute[274317]: 2026-02-01 10:04:17.999 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:18 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:18 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:18 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:18 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 288 KiB/s wr, 18 op/s Feb 1 05:04:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:04:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:19 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:19 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:19 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:19 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command 
mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:19 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:19 localhost nova_compute[274317]: 2026-02-01 10:04:19.676 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:20 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:20 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:20 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 242 KiB/s wr, 15 op/s Feb 1 05:04:20 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:20 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, 
sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:21 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:21 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:04:21 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934 Feb 1 05:04:21 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:21 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:21 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:21 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:04:21 Feb 1 05:04:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:04:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:04:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['manila_data', 'backups', 
'vms', '.mgr', 'images', 'manila_metadata', 'volumes'] Feb 1 05:04:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:04:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:04:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0017890014307649358 of space, bias 4.0, pg target 1.424045138888889 quantized to 16 (current 16) Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] 
TrashPurgeScheduleHandler: load_schedules Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:04:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:04:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:22 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:22 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:04:22 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap 
v657: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 314 KiB/s wr, 17 op/s Feb 1 05:04:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta' Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "format": "json"}]: dispatch Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:23 localhost nova_compute[274317]: 2026-02-01 10:04:23.030 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:23 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:23 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:23 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:23 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:04:24 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:24 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:04:24 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:24 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:24 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:24 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:24 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:24 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, 
auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 240 KiB/s wr, 15 op/s Feb 1 05:04:24 localhost nova_compute[274317]: 2026-02-01 10:04:24.679 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:04:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:25 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:25 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:25 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "snap_name": "bdf5fb57-afe7-4827-9246-942a9223d37d", 
"format": "json"}]: dispatch Feb 1 05:04:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:25 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:04:26 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:26 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:26 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:26 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 240 KiB/s wr, 15 op/s Feb 1 05:04:28 localhost nova_compute[274317]: 2026-02-01 10:04:28.064 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:28 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 
05:04:28 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:04:28 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:28 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934 Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:28 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 625 B/s rd, 296 KiB/s wr, 18 op/s Feb 1 05:04:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:29 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:29 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:29 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:29 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:04:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", 
"format": "json"} v 0) Feb 1 05:04:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:04:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:29 localhost nova_compute[274317]: 2026-02-01 10:04:29.726 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae161acb-9a7a-4b27-a3e8-29a643bfd153", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ae161acb-9a7a-4b27-a3e8-29a643bfd153/.meta.tmp' Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ae161acb-9a7a-4b27-a3e8-29a643bfd153/.meta.tmp' to config b'/volumes/_nogroup/ae161acb-9a7a-4b27-a3e8-29a643bfd153/.meta' Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 
05:04:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae161acb-9a7a-4b27-a3e8-29a643bfd153", "format": "json"}]: dispatch Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 05:04:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 05:04:30 localhost podman[236852]: time="2026-02-01T10:04:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:04:30 localhost podman[236852]: @ - - [01/Feb/2026:10:04:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:04:30 localhost podman[236852]: @ - - [01/Feb/2026:10:04:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18342 "" "Go-http-client/1.1" Feb 1 05:04:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:04:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 192 KiB/s wr, 12 op/s Feb 1 05:04:31 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "tenant_id": "8c7611c3d483414ea2f2b40e93062710", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:31 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:31 localhost 
ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID tempest-cephx-id-2018707573 with tenant 8c7611c3d483414ea2f2b40e93062710 Feb 1 05:04:31 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:31 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:31 localhost openstack_network_exporter[239388]: ERROR 10:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:04:31 localhost openstack_network_exporter[239388]: Feb 1 05:04:31 localhost openstack_network_exporter[239388]: ERROR 10:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:04:31 localhost openstack_network_exporter[239388]: Feb 1 05:04:31 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume authorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, tenant_id:8c7611c3d483414ea2f2b40e93062710, vol_name:cephfs) < "" Feb 1 05:04:32 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:32 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:32 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2018707573", "caps": ["mds", "allow rw path=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934", "osd", "allow rw pool=manila_data namespace=fsvolumens_976e5581-4212-481d-a9bc-03c631888d9c", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:32 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : 
from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:04:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:04:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:32 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:32 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:32 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:32 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 325 KiB/s wr, 19 op/s Feb 1 05:04:33 localhost nova_compute[274317]: 2026-02-01 10:04:33.111 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:33 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' 
cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:33 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae161acb-9a7a-4b27-a3e8-29a643bfd153", "format": "json"}]: dispatch Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:33 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:33.175+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae161acb-9a7a-4b27-a3e8-29a643bfd153' of type subvolume Feb 1 05:04:33 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae161acb-9a7a-4b27-a3e8-29a643bfd153' of type subvolume Feb 1 05:04:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae161acb-9a7a-4b27-a3e8-29a643bfd153", "force": true, "format": "json"}]: dispatch Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ae161acb-9a7a-4b27-a3e8-29a643bfd153'' moved to trashcan Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae161acb-9a7a-4b27-a3e8-29a643bfd153, vol_name:cephfs) < "" Feb 1 05:04:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 216 KiB/s wr, 14 op/s Feb 1 05:04:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:04:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1496006481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:04:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:04:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1496006481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:04:34 localhost nova_compute[274317]: 2026-02-01 10:04:34.730 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:04:34 localhost podman[317762]: 2026-02-01 10:04:34.878551583 +0000 UTC m=+0.089973693 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 1 05:04:34 localhost podman[317762]: 2026-02-01 10:04:34.88780762 +0000 UTC m=+0.099229790 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 05:04:34 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:04:34 localhost podman[317761]: 2026-02-01 10:04:34.933032574 +0000 UTC m=+0.145090894 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, release=1769056855, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, distribution-scope=public, io.openshift.tags=minimal rhel9) Feb 1 05:04:34 localhost podman[317761]: 2026-02-01 10:04:34.949637419 +0000 UTC m=+0.161695769 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, release=1769056855, container_name=openstack_network_exporter, distribution-scope=public, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 1 05:04:34 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 05:04:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:35 localhost podman[317763]: 2026-02-01 10:04:35.045786232 +0000 UTC m=+0.256855751 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Feb 1 05:04:35 localhost podman[317763]: 2026-02-01 10:04:35.093340128 +0000 UTC m=+0.304409647 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:04:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} v 0) Feb 1 05:04:35 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} v 0) Feb 1 05:04:35 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:35 localhost podman[317764]: 2026-02-01 10:04:35.101974566 +0000 UTC m=+0.299058401 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:04:35 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
Feb 1 05:04:35 localhost podman[317764]: 2026-02-01 10:04:35.136533799 +0000 UTC m=+0.333617624 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume deauthorize, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "auth_id": "tempest-cephx-id-2018707573", "format": "json"}]: dispatch Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2018707573, client_metadata.root=/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c/c5a01dcc-4cc5-42a5-9fd3-97ead336e934 Feb 1 05:04:35 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:04:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:35 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2018707573", "format": "json"} : dispatch Feb 1 05:04:35 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"} : dispatch Feb 1 05:04:35 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2018707573"}]': finished Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2018707573, format:json, prefix:fs subvolume evict, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:04:35 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:04:35 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients 
with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:35 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:36 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:04:36 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:04:36 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:04:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 215 KiB/s wr, 12 op/s Feb 1 05:04:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2ffc46e-0111-4c7c-9786-7a03cdade368", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2ffc46e-0111-4c7c-9786-7a03cdade368/.meta.tmp' Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2ffc46e-0111-4c7c-9786-7a03cdade368/.meta.tmp' to config b'/volumes/_nogroup/c2ffc46e-0111-4c7c-9786-7a03cdade368/.meta' Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2ffc46e-0111-4c7c-9786-7a03cdade368", "format": "json"}]: dispatch Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:38 localhost nova_compute[274317]: 2026-02-01 10:04:38.159 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 287 KiB/s wr, 17 op/s Feb 1 05:04:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "976e5581-4212-481d-a9bc-03c631888d9c", "format": "json"}]: dispatch Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:976e5581-4212-481d-a9bc-03c631888d9c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:976e5581-4212-481d-a9bc-03c631888d9c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:38 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '976e5581-4212-481d-a9bc-03c631888d9c' of type subvolume Feb 1 05:04:38 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:38.798+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '976e5581-4212-481d-a9bc-03c631888d9c' of type subvolume Feb 1 05:04:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "976e5581-4212-481d-a9bc-03c631888d9c", "force": true, "format": "json"}]: dispatch Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/976e5581-4212-481d-a9bc-03c631888d9c'' moved to trashcan Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:976e5581-4212-481d-a9bc-03c631888d9c, vol_name:cephfs) < "" Feb 1 05:04:38 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:38 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:38 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", 
"format": "json"} : dispatch Feb 1 05:04:38 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:38 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:38 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:39 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:39 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:39 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:39 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:39 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:39 localhost nova_compute[274317]: 2026-02-01 10:04:39.759 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2ffc46e-0111-4c7c-9786-7a03cdade368", "format": "json"}]: dispatch Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, format:json, prefix:fs clone 
status, vol_name:cephfs) < "" Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:40 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:40.133+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2ffc46e-0111-4c7c-9786-7a03cdade368' of type subvolume Feb 1 05:04:40 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2ffc46e-0111-4c7c-9786-7a03cdade368' of type subvolume Feb 1 05:04:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2ffc46e-0111-4c7c-9786-7a03cdade368", "force": true, "format": "json"}]: dispatch Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2ffc46e-0111-4c7c-9786-7a03cdade368'' moved to trashcan Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2ffc46e-0111-4c7c-9786-7a03cdade368, vol_name:cephfs) < "" Feb 1 05:04:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 205 KiB/s wr, 11 op/s Feb 1 05:04:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:41.779 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:04:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:41.780 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:04:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:04:41.780 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:04:42 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:42 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:04:42 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:42 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:42 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 298 KiB/s wr, 16 op/s Feb 1 05:04:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:43 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:43 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 1 05:04:43 localhost nova_compute[274317]: 2026-02-01 10:04:43.205 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
create", "vol_name": "cephfs", "sub_name": "89116974-4a6c-4049-85b6-edf55e1de63b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/89116974-4a6c-4049-85b6-edf55e1de63b/.meta.tmp' Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/89116974-4a6c-4049-85b6-edf55e1de63b/.meta.tmp' to config b'/volumes/_nogroup/89116974-4a6c-4049-85b6-edf55e1de63b/.meta' Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:43 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "89116974-4a6c-4049-85b6-edf55e1de63b", "format": "json"}]: dispatch Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:43 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 166 KiB/s wr, 10 op/s Feb 1 05:04:44 localhost nova_compute[274317]: 2026-02-01 10:04:44.758 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:04:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:45 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:45 localhost 
ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:45 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:04:45 localhost podman[317847]: 2026-02-01 10:04:45.875843908 +0000 UTC m=+0.088910280 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 1 05:04:45 localhost podman[317847]: 
2026-02-01 10:04:45.915733856 +0000 UTC m=+0.128800268 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:04:45 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 05:04:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:46 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:46 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 165 KiB/s wr, 9 op/s Feb 1 05:04:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "89116974-4a6c-4049-85b6-edf55e1de63b", "format": "json"}]: dispatch Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:89116974-4a6c-4049-85b6-edf55e1de63b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:89116974-4a6c-4049-85b6-edf55e1de63b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:46 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:46.688+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89116974-4a6c-4049-85b6-edf55e1de63b' of type subvolume Feb 1 05:04:46 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89116974-4a6c-4049-85b6-edf55e1de63b' of type subvolume Feb 1 05:04:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "89116974-4a6c-4049-85b6-edf55e1de63b", "force": true, "format": "json"}]: dispatch Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/89116974-4a6c-4049-85b6-edf55e1de63b'' moved to trashcan Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:89116974-4a6c-4049-85b6-edf55e1de63b, vol_name:cephfs) < "" Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.088342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287088388, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1529, "num_deletes": 261, "total_data_size": 2712974, "memory_usage": 2844272, "flush_reason": "Manual Compaction"} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287105041, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1775829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31424, "largest_seqno": 32948, "table_properties": {"data_size": 1769395, "index_size": 3391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16685, "raw_average_key_size": 21, "raw_value_size": 1755307, "raw_average_value_size": 2244, "num_data_blocks": 147, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769940233, "oldest_key_time": 1769940233, "file_creation_time": 1769940287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 16746 microseconds, and 6130 cpu microseconds. Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.105089) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1775829 bytes OK Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.105112) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.107991) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.108011) EVENT_LOG_v1 {"time_micros": 1769940287108005, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.108031) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2705142, prev total WAL file size 2705466, number of live WAL files 2. Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.108897) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323634' seq:72057594037927935, type:22 .. '6C6F676D0034353136' seq:0, type:0; will stop at (end) Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1734KB)], [48(20MB)] Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287108971, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 23039459, "oldest_snapshot_seqno": -1} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14401 keys, 22821427 bytes, temperature: kUnknown Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287241984, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 22821427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 22737588, "index_size": 46746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 387409, "raw_average_key_size": 26, "raw_value_size": 22491272, "raw_average_value_size": 1561, "num_data_blocks": 1741, "num_entries": 14401, "num_filter_entries": 14401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769940287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.242370) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 22821427 bytes Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.244160) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.1 rd, 171.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 20.3 +0.0 blob) out(21.8 +0.0 blob), read-write-amplify(25.8) write-amplify(12.9) OK, records in: 14946, records dropped: 545 output_compression: NoCompression Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.244186) EVENT_LOG_v1 {"time_micros": 1769940287244175, "job": 28, "event": "compaction_finished", "compaction_time_micros": 133095, "compaction_time_cpu_micros": 58371, "output_level": 6, "num_output_files": 1, "total_output_size": 22821427, "num_input_records": 14946, "num_output_records": 14401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287244590, "job": 28, "event": "table_file_deletion", "file_number": 50} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940287247547, "job": 28, "event": "table_file_deletion", "file_number": 48} Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.108741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.247608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.247614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.247618) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.247621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:04:47 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:04:47.247625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:04:48 localhost nova_compute[274317]: 2026-02-01 10:04:48.259 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:04:48 localhost podman[317885]: 2026-02-01 10:04:48.463980219 +0000 UTC m=+0.085365679 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 1 05:04:48 localhost podman[317885]: 2026-02-01 10:04:48.469317515 +0000 UTC m=+0.090702995 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:04:48 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:04:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 227 KiB/s wr, 13 op/s Feb 1 05:04:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 1 05:04:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:48 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 1 05:04:48 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:48 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice", "format": "json"}]: dispatch Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:48 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 1 05:04:49 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 1 05:04:49 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': 
finished Feb 1 05:04:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:04:49 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:04:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:04:49 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:04:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:04:49 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev 2df47517-d091-4316-8057-ca155b0f19e8 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:04:49 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev 2df47517-d091-4316-8057-ca155b0f19e8 (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:04:49 localhost ceph-mgr[278126]: [progress INFO root] Completed event 2df47517-d091-4316-8057-ca155b0f19e8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:04:49 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:04:49 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:04:49 localhost nova_compute[274317]: 2026-02-01 10:04:49.797 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0f0ccb95-b867-4b25-96e6-6d413cd0de0c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f0ccb95-b867-4b25-96e6-6d413cd0de0c/.meta.tmp' Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f0ccb95-b867-4b25-96e6-6d413cd0de0c/.meta.tmp' to config b'/volumes/_nogroup/0f0ccb95-b867-4b25-96e6-6d413cd0de0c/.meta' Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"0f0ccb95-b867-4b25-96e6-6d413cd0de0c", "format": "json"}]: dispatch Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:50 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:04:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:04:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 155 KiB/s wr, 8 op/s Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:04:51 localhost ovn_controller[152787]: 2026-02-01T10:04:51Z|00263|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Feb 1 05:04:51 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:04:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:04:51 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:04:51 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:51 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:51 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:51 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 237 KiB/s wr, 12 op/s Feb 1 05:04:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:04:52 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:52 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:53 localhost nova_compute[274317]: 2026-02-01 10:04:53.302 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:53 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0f0ccb95-b867-4b25-96e6-6d413cd0de0c", "format": "json"}]: dispatch Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:04:53 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:04:53.327+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f0ccb95-b867-4b25-96e6-6d413cd0de0c' of type subvolume Feb 1 05:04:53 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f0ccb95-b867-4b25-96e6-6d413cd0de0c' of type subvolume Feb 1 05:04:53 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0f0ccb95-b867-4b25-96e6-6d413cd0de0c", "force": true, "format": "json"}]: dispatch Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0f0ccb95-b867-4b25-96e6-6d413cd0de0c'' moved to trashcan Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:04:53 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f0ccb95-b867-4b25-96e6-6d413cd0de0c, vol_name:cephfs) < "" Feb 1 05:04:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 145 KiB/s wr, 9 op/s Feb 1 05:04:54 localhost nova_compute[274317]: 2026-02-01 10:04:54.801 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:04:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:04:55 localhost ceph-mon[298604]: log_channel(audit) log [INF] : 
from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:04:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:04:56 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:56 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:56 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:04:56 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:04:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 144 KiB/s wr, 8 op/s Feb 1 05:04:56 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:04:56 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:04:57 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8/.meta.tmp' Feb 1 05:04:57 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8/.meta.tmp' to config b'/volumes/_nogroup/05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8/.meta' Feb 1 05:04:57 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:04:57 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8", "format": "json"}]: dispatch Feb 1 05:04:57 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:04:57 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.116 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:04:58 localhost nova_compute[274317]: 2026-02-01 10:04:58.345 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:58 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:04:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:04:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:58 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice_bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:04:58 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:04:58 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:58 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:04:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 198 KiB/s wr, 11 op/s Feb 1 05:04:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:04:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", 
"allow r"], "format": "json"} : dispatch Feb 1 05:04:59 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:04:59 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:04:59 localhost nova_compute[274317]: 2026-02-01 10:04:59.831 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:04:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8", "format": "json"}]: dispatch Feb 1 05:04:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:00 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:00.003+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8' of type subvolume Feb 1 05:05:00 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8' of type subvolume Feb 1 05:05:00 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8", "force": true, "format": "json"}]: dispatch Feb 1 05:05:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:05:00 localhost podman[236852]: time="2026-02-01T10:05:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:05:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8'' moved to trashcan Feb 1 05:05:00 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:00 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05cbe550-ebf5-4e9b-a9e6-4d0d3e6763c8, vol_name:cephfs) < "" Feb 1 05:05:00 localhost podman[236852]: @ - - 
[01/Feb/2026:10:05:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:05:00 localhost podman[236852]: @ - - [01/Feb/2026:10:05:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18344 "" "Go-http-client/1.1" Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.121 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.122 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.123 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.124 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:05:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 136 KiB/s wr, 7 op/s Feb 1 05:05:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:05:00 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3351195153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.569 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.775 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.777 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11485MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.777 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.777 274321 DEBUG oslo_concurrency.lockutils [None 
req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.840 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.840 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:05:00 localhost nova_compute[274317]: 2026-02-01 10:05:00.853 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:05:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:05:01 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3029507846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:05:01 localhost nova_compute[274317]: 2026-02-01 10:05:01.302 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:05:01 localhost nova_compute[274317]: 2026-02-01 10:05:01.307 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:05:01 localhost nova_compute[274317]: 2026-02-01 10:05:01.322 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:05:01 localhost nova_compute[274317]: 2026-02-01 10:05:01.323 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:05:01 localhost nova_compute[274317]: 
2026-02-01 10:05:01.323 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:05:01 localhost openstack_network_exporter[239388]: ERROR 10:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:05:01 localhost openstack_network_exporter[239388]: Feb 1 05:05:01 localhost openstack_network_exporter[239388]: ERROR 10:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:05:01 localhost openstack_network_exporter[239388]: Feb 1 05:05:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 1 05:05:01 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:05:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 1 05:05:01 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:01 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:05:01 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:02 localhost nova_compute[274317]: 2026-02-01 10:05:02.322 274321 
DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:02 localhost nova_compute[274317]: 2026-02-01 10:05:02.323 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:02 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:05:02 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 1 05:05:02 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 1 05:05:02 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 1 05:05:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 216 KiB/s wr, 11 op/s Feb 1 05:05:03 localhost nova_compute[274317]: 2026-02-01 10:05:03.391 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:05:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:05:03 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "snap_name": "bdf5fb57-afe7-4827-9246-942a9223d37d_6da53b8d-dd56-457c-aa1b-d2c1e64390fb", "force": true, "format": "json"}]: dispatch Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d_6da53b8d-dd56-457c-aa1b-d2c1e64390fb, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta' Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d_6da53b8d-dd56-457c-aa1b-d2c1e64390fb, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:03 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "snap_name": "bdf5fb57-afe7-4827-9246-942a9223d37d", "force": true, "format": "json"}]: dispatch Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d, 
sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta.tmp' to config b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c/.meta' Feb 1 05:05:03 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdf5fb57-afe7-4827-9246-942a9223d37d, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:04 localhost nova_compute[274317]: 2026-02-01 10:05:04.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:04 localhost nova_compute[274317]: 2026-02-01 10:05:04.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:05:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 135 KiB/s wr, 8 op/s Feb 1 05:05:04 localhost nova_compute[274317]: 2026-02-01 10:05:04.833 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:04 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:05:04 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:04 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:05:04 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:04 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:05:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:05:05 
localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:05 localhost nova_compute[274317]: 2026-02-01 10:05:05.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:05 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:05 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:05 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:05 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. 
Feb 1 05:05:05 localhost podman[318023]: 2026-02-01 10:05:05.888183704 +0000 UTC m=+0.094198785 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:05:05 localhost podman[318023]: 2026-02-01 10:05:05.929602709 +0000 UTC m=+0.135617720 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true) Feb 1 05:05:05 localhost systemd[1]: tmp-crun.wIyqr7.mount: Deactivated successfully. Feb 1 05:05:05 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:05:05 localhost podman[318022]: 2026-02-01 10:05:05.948803164 +0000 UTC m=+0.158222950 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter) Feb 1 05:05:06 localhost podman[318025]: 2026-02-01 10:05:06.003351807 +0000 UTC m=+0.203598389 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 05:05:06 localhost podman[318024]: 2026-02-01 10:05:06.02342158 +0000 UTC m=+0.227396427 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 05:05:06 localhost podman[318025]: 2026-02-01 10:05:06.038653023 +0000 UTC m=+0.238899585 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:05:06 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:05:06 localhost podman[318024]: 2026-02-01 10:05:06.08016296 +0000 UTC m=+0.284137767 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller) Feb 1 05:05:06 localhost podman[318022]: 2026-02-01 10:05:06.090495491 +0000 UTC m=+0.299915277 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z) Feb 1 05:05:06 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:05:06 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. Feb 1 05:05:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 134 KiB/s wr, 7 op/s Feb 1 05:05:06 localhost systemd[1]: tmp-crun.Ygo2b1.mount: Deactivated successfully. 
Feb 1 05:05:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "format": "json"}]: dispatch Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:07 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:07.171+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92c44af7-ac1a-42f2-8baf-64ce97a37c1c' of type subvolume Feb 1 05:05:07 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92c44af7-ac1a-42f2-8baf-64ce97a37c1c' of type subvolume Feb 1 05:05:07 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92c44af7-ac1a-42f2-8baf-64ce97a37c1c", "force": true, "format": "json"}]: dispatch Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/92c44af7-ac1a-42f2-8baf-64ce97a37c1c'' moved to trashcan Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:07 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92c44af7-ac1a-42f2-8baf-64ce97a37c1c, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "40dc17a8-88e2-443b-88e0-b305a0120dd3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/40dc17a8-88e2-443b-88e0-b305a0120dd3/.meta.tmp' Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/40dc17a8-88e2-443b-88e0-b305a0120dd3/.meta.tmp' to config b'/volumes/_nogroup/40dc17a8-88e2-443b-88e0-b305a0120dd3/.meta' Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 
05:05:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "40dc17a8-88e2-443b-88e0-b305a0120dd3", "format": "json"}]: dispatch Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 05:05:08 localhost nova_compute[274317]: 2026-02-01 10:05:08.418 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 197 KiB/s wr, 11 op/s Feb 1 05:05:08 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:05:08 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:08 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:05:08 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:08 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:08 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:08 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:08 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", 
"vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:05:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:09 localhost nova_compute[274317]: 2026-02-01 10:05:09.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:09 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e295 e295: 6 total, 6 up, 6 in Feb 1 05:05:09 localhost nova_compute[274317]: 2026-02-01 10:05:09.874 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 171 KiB/s wr, 9 op/s Feb 1 05:05:11 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:11.411 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:05:11 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:11.412 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:05:11 localhost nova_compute[274317]: 2026-02-01 10:05:11.413 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:11 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:11.414 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:05:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "r", "format": "json"}]: dispatch Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:11 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:05:11 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:11 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID alice bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:05:11 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:05:11 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:11 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:11 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:11 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:11 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow r pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1281be2a-d93e-437b-9b63-ac349c9cd9d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1281be2a-d93e-437b-9b63-ac349c9cd9d6/.meta.tmp' Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1281be2a-d93e-437b-9b63-ac349c9cd9d6/.meta.tmp' to config b'/volumes/_nogroup/1281be2a-d93e-437b-9b63-ac349c9cd9d6/.meta' Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1281be2a-d93e-437b-9b63-ac349c9cd9d6", "format": "json"}]: dispatch Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 172 KiB/s wr, 10 op/s Feb 1 05:05:13 localhost nova_compute[274317]: 2026-02-01 10:05:13.465 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 185 KiB/s wr, 11 op/s Feb 1 05:05:14 localhost nova_compute[274317]: 2026-02-01 10:05:14.878 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Feb 1 05:05:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 1 05:05:15 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 1 05:05:15 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7fe5699b-e701-4ec6-8f05-4a9b4e567acc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fe5699b-e701-4ec6-8f05-4a9b4e567acc/.meta.tmp' Feb 1 05:05:15 localhost 
ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fe5699b-e701-4ec6-8f05-4a9b4e567acc/.meta.tmp' to config b'/volumes/_nogroup/7fe5699b-e701-4ec6-8f05-4a9b4e567acc/.meta' Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7fe5699b-e701-4ec6-8f05-4a9b4e567acc", "format": "json"}]: dispatch Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 1 05:05:16 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 1 05:05:16 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 1 05:05:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 185 KiB/s wr, 11 op/s Feb 1 05:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:05:16 localhost podman[318105]: 2026-02-01 10:05:16.8676924 +0000 UTC m=+0.082952565 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 1 05:05:16 localhost podman[318105]: 2026-02-01 10:05:16.881796028 +0000 UTC m=+0.097056193 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:05:16 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:05:17 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e296 e296: 6 total, 6 up, 6 in Feb 1 05:05:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:05:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 1 05:05:18 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:18 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: Creating meta for ID bob with tenant 2b47af5b1cd441dab5c6c7ba6645e3a3 Feb 1 05:05:18 localhost nova_compute[274317]: 2026-02-01 10:05:18.505 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:18 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 345 B/s rd, 184 KiB/s wr, 10 op/s Feb 1 05:05:18 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} v 0) Feb 1 05:05:18 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:05:18 localhost podman[318125]: 2026-02-01 10:05:18.867238847 +0000 UTC m=+0.078837107 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:05:18 localhost podman[318125]: 2026-02-01 10:05:18.879705413 +0000 UTC m=+0.091303673 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 05:05:18 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:05:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6ea849be-f272-4004-a8b4-e7f86ba4f16e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6ea849be-f272-4004-a8b4-e7f86ba4f16e/.meta.tmp' Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6ea849be-f272-4004-a8b4-e7f86ba4f16e/.meta.tmp' to config b'/volumes/_nogroup/6ea849be-f272-4004-a8b4-e7f86ba4f16e/.meta' Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:19 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6ea849be-f272-4004-a8b4-e7f86ba4f16e", "format": "json"}]: dispatch Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:19 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:19 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:19 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"} : dispatch Feb 1 05:05:19 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88", "mon", "allow r"], "format": "json"}]': finished Feb 1 05:05:19 localhost nova_compute[274317]: 2026-02-01 10:05:19.906 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:20 
localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 163 KiB/s wr, 9 op/s Feb 1 05:05:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:05:21 Feb 1 05:05:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:05:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:05:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'images', 'backups', 'vms', 'volumes', 'manila_data'] Feb 1 05:05:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Feb 1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 
1 05:05:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0025597278929369066 of space, bias 4.0, pg target 2.0375434027777777 quantized to 16 (current 16) Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:05:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:05:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 67 KiB/s wr, 5 op/s Feb 1 05:05:22 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:22 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/.meta.tmp' Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/.meta.tmp' to config b'/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/.meta' Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "format": "json"}]: dispatch Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: log_channel(audit) log 
[DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6ea849be-f272-4004-a8b4-e7f86ba4f16e", "format": "json"}]: dispatch Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:23.191+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6ea849be-f272-4004-a8b4-e7f86ba4f16e' of type subvolume Feb 1 05:05:23 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6ea849be-f272-4004-a8b4-e7f86ba4f16e' of type subvolume Feb 1 05:05:23 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6ea849be-f272-4004-a8b4-e7f86ba4f16e", "force": true, "format": "json"}]: dispatch Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6ea849be-f272-4004-a8b4-e7f86ba4f16e'' moved to trashcan Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:23 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6ea849be-f272-4004-a8b4-e7f86ba4f16e, vol_name:cephfs) < "" Feb 1 05:05:23 localhost nova_compute[274317]: 2026-02-01 10:05:23.507 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 124 KiB/s wr, 6 op/s Feb 1 05:05:24 localhost nova_compute[274317]: 2026-02-01 10:05:24.908 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7fe5699b-e701-4ec6-8f05-4a9b4e567acc", "format": "json"}]: dispatch Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:26 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:26.491+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7fe5699b-e701-4ec6-8f05-4a9b4e567acc' of type subvolume Feb 1 05:05:26 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7fe5699b-e701-4ec6-8f05-4a9b4e567acc' of type subvolume Feb 1 05:05:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7fe5699b-e701-4ec6-8f05-4a9b4e567acc", "force": true, "format": "json"}]: dispatch Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7fe5699b-e701-4ec6-8f05-4a9b4e567acc'' moved to trashcan Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7fe5699b-e701-4ec6-8f05-4a9b4e567acc, vol_name:cephfs) < "" Feb 1 05:05:26 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "auth_id": "bob", "tenant_id": "2b47af5b1cd441dab5c6c7ba6645e3a3", "access_level": "rw", "format": "json"}]: dispatch Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 1 05:05:26 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 124 KiB/s wr, 6 op/s Feb 1 05:05:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37,allow rw path=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88,allow rw pool=manila_data namespace=fsvolumens_ee5830e4-c3f6-4299-9c44-15480a7cfa4f"]} v 0) Feb 1 05:05:26 localhost ceph-mon[298604]: 
log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37,allow rw path=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88,allow rw pool=manila_data namespace=fsvolumens_ee5830e4-c3f6-4299-9c44-15480a7cfa4f"]} : dispatch Feb 1 05:05:26 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 1 05:05:26 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:26 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, tenant_id:2b47af5b1cd441dab5c6c7ba6645e3a3, vol_name:cephfs) < "" Feb 1 05:05:27 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:27 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37,allow rw path=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88,allow rw pool=manila_data namespace=fsvolumens_ee5830e4-c3f6-4299-9c44-15480a7cfa4f"]} : dispatch Feb 1 05:05:27 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37,allow rw path=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88,allow rw pool=manila_data namespace=fsvolumens_ee5830e4-c3f6-4299-9c44-15480a7cfa4f"]} : dispatch Feb 1 05:05:27 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37,allow rw path=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88,allow rw pool=manila_data namespace=fsvolumens_ee5830e4-c3f6-4299-9c44-15480a7cfa4f"]}]': finished Feb 1 05:05:27 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:28 localhost nova_compute[274317]: 2026-02-01 10:05:28.521 274321 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 268 B/s rd, 185 KiB/s wr, 8 op/s Feb 1 05:05:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "auth_id": "bob", "format": "json"}]: dispatch Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 1 05:05:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:29 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88"]} v 0) Feb 1 05:05:29 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88"]} : dispatch Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "auth_id": "bob", "format": "json"}]: dispatch Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f/96ec79df-7282-459b-9f45-01a3f66fbb7e Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs 
clone status", "vol_name": "cephfs", "clone_name": "1281be2a-d93e-437b-9b63-ac349c9cd9d6", "format": "json"}]: dispatch Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:29.871+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1281be2a-d93e-437b-9b63-ac349c9cd9d6' of type subvolume Feb 1 05:05:29 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1281be2a-d93e-437b-9b63-ac349c9cd9d6' of type subvolume Feb 1 05:05:29 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1281be2a-d93e-437b-9b63-ac349c9cd9d6", "force": true, "format": "json"}]: dispatch Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1281be2a-d93e-437b-9b63-ac349c9cd9d6'' moved to trashcan Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:29 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1281be2a-d93e-437b-9b63-ac349c9cd9d6, vol_name:cephfs) < "" Feb 1 05:05:29 localhost nova_compute[274317]: 2026-02-01 10:05:29.954 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:30 localhost podman[236852]: time="2026-02-01T10:05:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:05:30 localhost podman[236852]: @ - - [01/Feb/2026:10:05:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:05:30 localhost podman[236852]: @ - - [01/Feb/2026:10:05:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18344 "" "Go-http-client/1.1" Feb 1 05:05:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 132 KiB/s wr, 6 op/s Feb 1 05:05:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88"]} : dispatch Feb 1 05:05:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:30 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88"]} : dispatch Feb 1 05:05:30 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37", "osd", "allow rw pool=manila_data namespace=fsvolumens_1c2f0941-aab0-42d0-937e-94c942e5fb88"]}]': finished Feb 1 05:05:31 localhost openstack_network_exporter[239388]: ERROR 10:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:05:31 localhost openstack_network_exporter[239388]: Feb 1 05:05:31 localhost openstack_network_exporter[239388]: ERROR 10:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:05:31 localhost openstack_network_exporter[239388]: Feb 1 05:05:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 190 KiB/s wr, 9 op/s Feb 1 05:05:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "40dc17a8-88e2-443b-88e0-b305a0120dd3", "format": "json"}]: dispatch Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '40dc17a8-88e2-443b-88e0-b305a0120dd3' of type subvolume Feb 1 05:05:33 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:33.062+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '40dc17a8-88e2-443b-88e0-b305a0120dd3' of type subvolume Feb 1 05:05:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "40dc17a8-88e2-443b-88e0-b305a0120dd3", "force": true, "format": "json"}]: dispatch Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/40dc17a8-88e2-443b-88e0-b305a0120dd3'' moved to trashcan Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40dc17a8-88e2-443b-88e0-b305a0120dd3, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "bob", "format": "json"}]: dispatch Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:33 localhost nova_compute[274317]: 2026-02-01 10:05:33.567 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 1 05:05:33 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:33 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Feb 1 05:05:33 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "auth_id": "bob", "format": "json"}]: dispatch Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88/53cdb057-d7f8-43f2-812c-305c99393a37 Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 1 05:05:33 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:34 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 1 05:05:34 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth 
get", "entity": "client.bob", "format": "json"} : dispatch Feb 1 05:05:34 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 1 05:05:34 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Feb 1 05:05:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 189 KiB/s wr, 8 op/s Feb 1 05:05:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:05:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2584492261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:05:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:05:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2584492261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:05:34 localhost nova_compute[274317]: 2026-02-01 10:05:34.957 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 130 KiB/s wr, 6 op/s Feb 1 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:05:36 localhost systemd[1]: tmp-crun.uPZfYD.mount: Deactivated successfully. 
Feb 1 05:05:36 localhost podman[318153]: 2026-02-01 10:05:36.829372763 +0000 UTC m=+0.080993554 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 05:05:36 localhost podman[318153]: 2026-02-01 10:05:36.840784268 +0000 UTC m=+0.092405089 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:05:36 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. 
Feb 1 05:05:36 localhost podman[318150]: 2026-02-01 10:05:36.891305685 +0000 UTC m=+0.147815878 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter) Feb 1 05:05:36 localhost podman[318151]: 2026-02-01 10:05:36.937519499 +0000 UTC m=+0.193319290 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 1 05:05:36 localhost podman[318150]: 2026-02-01 10:05:36.956255251 +0000 UTC m=+0.212765444 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, release=1769056855, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=openstack_network_exporter, managed_by=edpm_ansible) Feb 1 05:05:36 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 05:05:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "format": "json"}]: dispatch Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:36 localhost podman[318151]: 2026-02-01 10:05:36.972643429 +0000 UTC m=+0.228443180 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:36 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee5830e4-c3f6-4299-9c44-15480a7cfa4f' of type subvolume Feb 1 05:05:36 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:36.977+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee5830e4-c3f6-4299-9c44-15480a7cfa4f' of type subvolume Feb 1 05:05:36 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee5830e4-c3f6-4299-9c44-15480a7cfa4f", "force": true, "format": "json"}]: dispatch Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume 
rm, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee5830e4-c3f6-4299-9c44-15480a7cfa4f'' moved to trashcan Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:36 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee5830e4-c3f6-4299-9c44-15480a7cfa4f, vol_name:cephfs) < "" Feb 1 05:05:36 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:05:37 localhost podman[318152]: 2026-02-01 10:05:37.049155423 +0000 UTC m=+0.300032591 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 1 05:05:37 localhost podman[318152]: 2026-02-01 10:05:37.088673659 +0000 UTC m=+0.339550827 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:05:37 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:05:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 192 KiB/s wr, 10 op/s Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.602 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:38 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:05:38.832 259225 INFO neutron.agent.linux.ip_lib [None req-21546086-5e68-41c2-95df-a8701f6241d6 - - - - - -] Device tapd2a0f71b-44 cannot be used as it has no MAC address#033[00m Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.852 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:38 localhost kernel: device tapd2a0f71b-44 entered promiscuous mode Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.863 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:38 localhost ovn_controller[152787]: 2026-02-01T10:05:38Z|00264|binding|INFO|Claiming lport d2a0f71b-4467-42be-891d-ec006aa4434a for this chassis. Feb 1 05:05:38 localhost ovn_controller[152787]: 2026-02-01T10:05:38Z|00265|binding|INFO|d2a0f71b-4467-42be-891d-ec006aa4434a: Claiming unknown Feb 1 05:05:38 localhost NetworkManager[5972]: [1769940338.8670] manager: (tapd2a0f71b-44): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Feb 1 05:05:38 localhost systemd-udevd[318244]: Network interface NamePolicy= disabled on kernel command line. 
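[Editor's illustrative note, not part of the captured log.] The binding messages above show ovn-controller claiming logical port d2a0f71b-4467-42be-891d-ec006aa4434a for this chassis and marking it installed/up. As a hedged illustration, one way to inspect that binding from the host is to query the southbound Port_Binding table; the sketch below assumes ovn-sbctl is installed locally and can reach the southbound database, which the log does not state.

    # Hypothetical sketch: look up a Port_Binding row for a logical port via ovn-sbctl.
    # Assumes ovn-sbctl is available and pointed at the right southbound DB; adjust as needed.
    import subprocess

    def show_port_binding(logical_port):
        # Generic ovsdb 'find' by the logical_port column (the same column shown in the log rows).
        out = subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding", f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    if __name__ == "__main__":
        print(show_port_binding("d2a0f71b-4467-42be-891d-ec006aa4434a"))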
Feb 1 05:05:38 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:38.878 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-33acccbf-fdce-4047-a62f-897349b76d78', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33acccbf-fdce-4047-a62f-897349b76d78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ca417b497f4e6882e6d3909dae11b9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efaebd7-2ff5-4535-bb80-29930fba1adb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d2a0f71b-4467-42be-891d-ec006aa4434a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:05:38 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:38.879 158655 INFO neutron.agent.ovn.metadata.agent [-] Port d2a0f71b-4467-42be-891d-ec006aa4434a in datapath 33acccbf-fdce-4047-a62f-897349b76d78 bound to our chassis#033[00m Feb 1 05:05:38 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:38.880 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Port 874ad3b7-ea03-438f-9071-2da637345e05 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 1 05:05:38 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:38.880 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33acccbf-fdce-4047-a62f-897349b76d78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 05:05:38 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:38.881 303130 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0cb76b-8d52-4e49-907f-b00602a701f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost ovn_controller[152787]: 2026-02-01T10:05:38Z|00266|binding|INFO|Setting lport d2a0f71b-4467-42be-891d-ec006aa4434a ovn-installed in OVS Feb 1 05:05:38 localhost ovn_controller[152787]: 2026-02-01T10:05:38Z|00267|binding|INFO|Setting lport d2a0f71b-4467-42be-891d-ec006aa4434a up in Southbound Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.902 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 
localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost journal[224955]: ethtool ioctl error on tapd2a0f71b-44: No such device Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.940 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:38 localhost nova_compute[274317]: 2026-02-01 10:05:38.968 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:39 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e297 e297: 6 total, 6 up, 6 in Feb 1 05:05:39 localhost podman[318315]: Feb 1 05:05:39 localhost podman[318315]: 2026-02-01 10:05:39.870431097 +0000 UTC m=+0.089301532 container create 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 1 05:05:39 localhost systemd[1]: Started libpod-conmon-4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314.scope. Feb 1 05:05:39 localhost podman[318315]: 2026-02-01 10:05:39.825481783 +0000 UTC m=+0.044352268 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 1 05:05:39 localhost systemd[1]: Started libcrun container. 
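[Editor's illustrative note, not part of the captured log.] The dnsmasq instance started above runs with static leases only and reads the per-network addn_hosts/host/opts files that the DHCP agent renders under /var/lib/neutron/dhcp/<network-id>/. A minimal sketch for inspecting those files on the host follows; the helper name and the assumption that the files are plain newline-delimited text are illustrative, not taken from the log.

    # Hypothetical sketch: print the dnsmasq config files neutron-dhcp-agent rendered for one network.
    # Path layout taken from the log above; file contents and formatting are an assumption here.
    from pathlib import Path

    def dump_dhcp_files(network_id, base="/var/lib/neutron/dhcp"):
        netdir = Path(base) / network_id
        for name in ("host", "addn_hosts", "opts"):
            path = netdir / name
            print(f"--- {path} ---")
            if path.exists():
                print(path.read_text(errors="replace") or "(empty)")
            else:
                print("(missing)")

    if __name__ == "__main__":
        dump_dhcp_files("33acccbf-fdce-4047-a62f-897349b76d78")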
Feb 1 05:05:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09f618ddecd5f6752fa51b48a7ea5a7239cb1ba93f57b916b2d542bccd407c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 1 05:05:39 localhost podman[318315]: 2026-02-01 10:05:39.949003915 +0000 UTC m=+0.167874360 container init 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 1 05:05:39 localhost dnsmasq[318334]: started, version 2.85 cachesize 150 Feb 1 05:05:39 localhost dnsmasq[318334]: DNS service limited to local subnets Feb 1 05:05:39 localhost dnsmasq[318334]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 1 05:05:39 localhost dnsmasq[318334]: warning: no upstream servers configured Feb 1 05:05:39 localhost dnsmasq-dhcp[318334]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 1 05:05:39 localhost dnsmasq[318334]: read /var/lib/neutron/dhcp/33acccbf-fdce-4047-a62f-897349b76d78/addn_hosts - 0 addresses Feb 1 05:05:39 localhost dnsmasq-dhcp[318334]: read /var/lib/neutron/dhcp/33acccbf-fdce-4047-a62f-897349b76d78/host Feb 1 05:05:39 localhost dnsmasq-dhcp[318334]: read /var/lib/neutron/dhcp/33acccbf-fdce-4047-a62f-897349b76d78/opts Feb 1 05:05:39 localhost nova_compute[274317]: 2026-02-01 10:05:39.984 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:39 localhost podman[318315]: 2026-02-01 10:05:39.986602223 +0000 UTC m=+0.205472668 container start 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 1 05:05:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e298 e298: 6 total, 6 up, 6 in Feb 1 05:05:40 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:05:40.230 259225 INFO neutron.agent.dhcp.agent [None req-42a447f9-d25e-479d-8ce2-2b8bd0236d90 - - - - - -] DHCP configuration for ports {'9614388b-eb52-4cd9-ad36-6e1d8e7f685b'} is completed#033[00m Feb 1 05:05:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "format": "json"}]: dispatch Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Starting _cmd_fs_clone_status(clone_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:40 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:40.318+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c2f0941-aab0-42d0-937e-94c942e5fb88' of type subvolume Feb 1 05:05:40 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c2f0941-aab0-42d0-937e-94c942e5fb88' of type subvolume Feb 1 05:05:40 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c2f0941-aab0-42d0-937e-94c942e5fb88", "force": true, "format": "json"}]: dispatch Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1c2f0941-aab0-42d0-937e-94c942e5fb88'' moved to trashcan Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:40 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c2f0941-aab0-42d0-937e-94c942e5fb88, vol_name:cephfs) < "" Feb 1 05:05:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 92 KiB/s wr, 5 op/s Feb 1 05:05:40 localhost systemd[1]: tmp-crun.c3EJHh.mount: Deactivated successfully. Feb 1 05:05:41 localhost dnsmasq[318334]: exiting on receipt of SIGTERM Feb 1 05:05:41 localhost systemd[1]: libpod-4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314.scope: Deactivated successfully. 
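[Editor's illustrative note, not part of the captured log.] The audit entries above show client.openstack asking for 'fs clone status' on a plain subvolume, which the volumes module rejects with "Operation not supported" because the object is not a clone, and then removing the subvolume with force. A hedged sketch of issuing the equivalent mgr commands from the CLI is below; it assumes a working ceph client with suitable credentials and is not the code path the log's client actually used.

    # Hypothetical sketch: 'ceph fs clone status' / 'ceph fs subvolume rm' CLI equivalents.
    # Assumes a reachable cluster and keyring; the clone-status call fails on a plain subvolume,
    # mirroring the EOPNOTSUPP reply seen in the log above.
    import json
    import subprocess

    def clone_status(vol, clone):
        res = subprocess.run(
            ["ceph", "fs", "clone", "status", vol, clone, "--format", "json"],
            capture_output=True, text=True,
        )
        return json.loads(res.stdout) if res.returncode == 0 else res.stderr.strip()

    def subvolume_rm(vol, name):
        subprocess.run(["ceph", "fs", "subvolume", "rm", vol, name, "--force"], check=True)

    if __name__ == "__main__":
        print(clone_status("cephfs", "1c2f0941-aab0-42d0-937e-94c942e5fb88"))
        subvolume_rm("cephfs", "1c2f0941-aab0-42d0-937e-94c942e5fb88")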
Feb 1 05:05:41 localhost podman[318350]: 2026-02-01 10:05:41.333140236 +0000 UTC m=+0.067541927 container kill 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 1 05:05:41 localhost ovn_controller[152787]: 2026-02-01T10:05:41Z|00268|binding|INFO|Removing iface tapd2a0f71b-44 ovn-installed in OVS Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.363 158655 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 874ad3b7-ea03-438f-9071-2da637345e05 with type ""#033[00m Feb 1 05:05:41 localhost ovn_controller[152787]: 2026-02-01T10:05:41Z|00269|binding|INFO|Removing lport d2a0f71b-4467-42be-891d-ec006aa4434a ovn-installed in OVS Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.365 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005604215.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcpd3c7262e-bf25-53c6-bfa9-f11e8686eb9b-33acccbf-fdce-4047-a62f-897349b76d78', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33acccbf-fdce-4047-a62f-897349b76d78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ca417b497f4e6882e6d3909dae11b9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005604215.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efaebd7-2ff5-4535-bb80-29930fba1adb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d2a0f71b-4467-42be-891d-ec006aa4434a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:05:41 localhost nova_compute[274317]: 2026-02-01 10:05:41.367 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.369 158655 INFO neutron.agent.ovn.metadata.agent [-] Port d2a0f71b-4467-42be-891d-ec006aa4434a in datapath 33acccbf-fdce-4047-a62f-897349b76d78 unbound from our chassis#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.373 158655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33acccbf-fdce-4047-a62f-897349b76d78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.374 303130 DEBUG oslo.privsep.daemon [-] 
privsep: reply[a9ef3f44-1af5-46da-b151-9346c1f1aa9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 1 05:05:41 localhost podman[318363]: 2026-02-01 10:05:41.40546254 +0000 UTC m=+0.059357023 container died 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:05:41 localhost podman[318363]: 2026-02-01 10:05:41.561389809 +0000 UTC m=+0.215284232 container cleanup 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 1 05:05:41 localhost systemd[1]: libpod-conmon-4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314.scope: Deactivated successfully. Feb 1 05:05:41 localhost podman[318370]: 2026-02-01 10:05:41.604410604 +0000 UTC m=+0.247765869 container remove 4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33acccbf-fdce-4047-a62f-897349b76d78, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 1 05:05:41 localhost nova_compute[274317]: 2026-02-01 10:05:41.615 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:41 localhost kernel: device tapd2a0f71b-44 left promiscuous mode Feb 1 05:05:41 localhost nova_compute[274317]: 2026-02-01 10:05:41.633 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:41 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:05:41.653 259225 INFO neutron.agent.dhcp.agent [None req-13ccfa7a-2ea3-4c70-afdb-0f7b98556939 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 05:05:41 localhost neutron_dhcp_agent[259221]: 2026-02-01 10:05:41.653 259225 INFO neutron.agent.dhcp.agent [None req-13ccfa7a-2ea3-4c70-afdb-0f7b98556939 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 1 05:05:41 localhost nova_compute[274317]: 2026-02-01 10:05:41.676 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.783 
158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.784 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:05:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:05:41.784 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:05:41 localhost systemd[1]: var-lib-containers-storage-overlay-c09f618ddecd5f6752fa51b48a7ea5a7239cb1ba93f57b916b2d542bccd407c1-merged.mount: Deactivated successfully. Feb 1 05:05:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b4ed587ee645ca6a596ea803f2a53c3fea9006817704916ebc64addea7f4314-userdata-shm.mount: Deactivated successfully. Feb 1 05:05:41 localhost systemd[1]: run-netns-qdhcp\x2d33acccbf\x2dfdce\x2d4047\x2da62f\x2d897349b76d78.mount: Deactivated successfully. Feb 1 05:05:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.7 MiB/s wr, 62 op/s Feb 1 05:05:43 localhost nova_compute[274317]: 2026-02-01 10:05:43.629 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.7 MiB/s wr, 62 op/s Feb 1 05:05:44 localhost nova_compute[274317]: 2026-02-01 10:05:44.986 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:45 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29ed0300-6f83-4d5e-934d-f9bed65972ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:45 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/29ed0300-6f83-4d5e-934d-f9bed65972ad/.meta.tmp' Feb 1 05:05:46 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/29ed0300-6f83-4d5e-934d-f9bed65972ad/.meta.tmp' to config b'/volumes/_nogroup/29ed0300-6f83-4d5e-934d-f9bed65972ad/.meta' Feb 1 05:05:46 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:46 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29ed0300-6f83-4d5e-934d-f9bed65972ad", "format": "json"}]: dispatch Feb 1 05:05:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:46 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 56 op/s Feb 1 05:05:47 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 e299: 6 total, 6 up, 6 in Feb 1 05:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:05:47 localhost podman[318399]: 2026-02-01 10:05:47.865335271 +0000 UTC m=+0.079202968 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute) Feb 1 05:05:47 localhost podman[318399]: 2026-02-01 10:05:47.880959447 +0000 UTC m=+0.094827124 container exec_died 
3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Feb 1 05:05:47 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. 
Feb 1 05:05:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 MiB/s wr, 56 op/s Feb 1 05:05:48 localhost nova_compute[274317]: 2026-02-01 10:05:48.668 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "065a1ec8-780b-4355-aa78-092fe74d1d95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/065a1ec8-780b-4355-aa78-092fe74d1d95/.meta.tmp' Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/065a1ec8-780b-4355-aa78-092fe74d1d95/.meta.tmp' to config b'/volumes/_nogroup/065a1ec8-780b-4355-aa78-092fe74d1d95/.meta' Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:49 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "065a1ec8-780b-4355-aa78-092fe74d1d95", "format": "json"}]: dispatch Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:49 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 05:05:49 localhost podman[318438]: 2026-02-01 10:05:49.675688047 +0000 UTC m=+0.078733124 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:05:49 localhost podman[318438]: 2026-02-01 10:05:49.68576494 +0000 UTC m=+0.088810017 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:05:49 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
Feb 1 05:05:49 localhost nova_compute[274317]: 2026-02-01 10:05:49.991 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain.devices.0}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604213.localdomain}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain.devices.0}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain.devices.0}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604215.localdomain}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005604212.localdomain}] v 0) Feb 1 05:05:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 47 op/s Feb 1 05:05:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:05:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:05:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:05:51 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:05:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:05:51 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev c95f1f35-5299-4082-b400-b5fd30318b3b (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:05:51 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev c95f1f35-5299-4082-b400-b5fd30318b3b (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:05:51 localhost ceph-mgr[278126]: [progress INFO root] Completed event 
c95f1f35-5299-4082-b400-b5fd30318b3b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:05:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:05:51 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:05:51 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:05:51 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:05:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:05:51 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:05:51 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:05:52 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "065a1ec8-780b-4355-aa78-092fe74d1d95", "format": "json"}]: dispatch Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:065a1ec8-780b-4355-aa78-092fe74d1d95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:065a1ec8-780b-4355-aa78-092fe74d1d95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:52 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:52.545+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '065a1ec8-780b-4355-aa78-092fe74d1d95' of type subvolume Feb 1 05:05:52 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '065a1ec8-780b-4355-aa78-092fe74d1d95' of type subvolume Feb 1 05:05:52 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "065a1ec8-780b-4355-aa78-092fe74d1d95", "force": true, "format": "json"}]: dispatch Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/065a1ec8-780b-4355-aa78-092fe74d1d95'' moved to trashcan Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:52 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:065a1ec8-780b-4355-aa78-092fe74d1d95, vol_name:cephfs) < "" Feb 1 05:05:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s Feb 1 05:05:52 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.912910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940352912952, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1493, "num_deletes": 253, "total_data_size": 1788539, "memory_usage": 1823576, "flush_reason": "Manual Compaction"} Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940352923347, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1146304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32953, "largest_seqno": 34441, "table_properties": {"data_size": 1140040, "index_size": 3350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15788, "raw_average_key_size": 21, "raw_value_size": 1126669, "raw_average_value_size": 1560, "num_data_blocks": 141, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769940287, "oldest_key_time": 1769940287, "file_creation_time": 1769940352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10484 microseconds, and 4446 cpu microseconds. Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
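[Editor's illustrative note, not part of the captured log.] The ceph-mon RocksDB messages above interleave human-readable flush/compaction lines with EVENT_LOG_v1 records whose payload is JSON. A small, illustrative parser for pulling those JSON events out of a captured log file follows; the input filename and the assumption that each event's JSON object fits on one line are mine, not the log's.

    # Hypothetical sketch: extract RocksDB EVENT_LOG_v1 JSON payloads (flush_started, flush_finished,
    # compaction_started, compaction_finished, table_file_creation, ...) from a captured log file.
    import json
    import re
    import sys

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_events(path):
        with open(path, errors="replace") as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if not m:
                    continue
                try:
                    yield json.loads(m.group(1))
                except json.JSONDecodeError:
                    continue  # event split across lines or truncated; skip it

    if __name__ == "__main__":
        for ev in iter_events(sys.argv[1]):
            print(ev.get("event"), ev.get("job"), ev.get("time_micros"))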
Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.923393) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1146304 bytes OK Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.923417) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.926040) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.926061) EVENT_LOG_v1 {"time_micros": 1769940352926055, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.926083) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1781147, prev total WAL file size 1781147, number of live WAL files 2. Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.926956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1119KB)], [51(21MB)] Feb 1 05:05:52 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940352927038, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 23967731, "oldest_snapshot_seqno": -1} Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14587 keys, 22436462 bytes, temperature: kUnknown Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940353051459, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 22436462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 22352342, "index_size": 46579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36485, "raw_key_size": 392401, "raw_average_key_size": 26, "raw_value_size": 22103756, "raw_average_value_size": 1515, "num_data_blocks": 1727, "num_entries": 14587, "num_filter_entries": 14587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769939270, "oldest_key_time": 0, "file_creation_time": 1769940352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c098c70d-588d-409e-9f3c-16c3b4da1135", "db_session_id": "HRI08R8OB38WGRLS0V9F", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.051959) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 22436462 bytes Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.054972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.3 rd, 180.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 21.8 +0.0 blob) out(21.4 +0.0 blob), read-write-amplify(40.5) write-amplify(19.6) OK, records in: 15123, records dropped: 536 output_compression: NoCompression Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.055003) EVENT_LOG_v1 {"time_micros": 1769940353054990, "job": 30, "event": "compaction_finished", "compaction_time_micros": 124668, "compaction_time_cpu_micros": 58286, "output_level": 6, "num_output_files": 1, "total_output_size": 22436462, "num_input_records": 15123, "num_output_records": 14587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940353055812, "job": 30, "event": "table_file_deletion", "file_number": 53} Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005604215/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769940353059360, "job": 30, "event": "table_file_deletion", "file_number": 51} Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:52.926809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.059523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.059532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.059535) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.059538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:05:53 localhost ceph-mon[298604]: rocksdb: (Original Log Time 2026/02/01-10:05:53.059542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 1 05:05:53 localhost nova_compute[274317]: 2026-02-01 10:05:53.710 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s Feb 1 05:05:54 localhost nova_compute[274317]: 2026-02-01 10:05:54.992 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:55 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:05:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29ed0300-6f83-4d5e-934d-f9bed65972ad", "format": "json"}]: dispatch Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:05:55 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:05:55.766+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29ed0300-6f83-4d5e-934d-f9bed65972ad' of type subvolume Feb 1 05:05:55 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29ed0300-6f83-4d5e-934d-f9bed65972ad' of type subvolume Feb 1 05:05:55 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29ed0300-6f83-4d5e-934d-f9bed65972ad", "force": true, "format": "json"}]: dispatch Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/29ed0300-6f83-4d5e-934d-f9bed65972ad'' moved to trashcan Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:05:55 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29ed0300-6f83-4d5e-934d-f9bed65972ad, vol_name:cephfs) < "" Feb 1 05:05:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 232 MiB data, 
1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s Feb 1 05:05:58 localhost nova_compute[274317]: 2026-02-01 10:05:58.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:05:58 localhost nova_compute[274317]: 2026-02-01 10:05:58.101 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:05:58 localhost nova_compute[274317]: 2026-02-01 10:05:58.102 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:05:58 localhost nova_compute[274317]: 2026-02-01 10:05:58.120 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:05:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 358 B/s rd, 98 KiB/s wr, 4 op/s Feb 1 05:05:58 localhost nova_compute[274317]: 2026-02-01 10:05:58.747 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:05:59 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2", "mode": "0755", "format": "json"}]: dispatch Feb 1 05:05:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 1 05:05:59 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 1 05:05:59 localhost nova_compute[274317]: 2026-02-01 10:05:59.116 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:00 localhost nova_compute[274317]: 2026-02-01 10:06:00.027 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:00 localhost podman[236852]: time="2026-02-01T10:06:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:06:00 localhost podman[236852]: @ - - [01/Feb/2026:10:06:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:06:00 localhost podman[236852]: @ - - [01/Feb/2026:10:06:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 18348 "" "Go-http-client/1.1" Feb 1 05:06:00 localhost nova_compute[274317]: 2026-02-01 10:06:00.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 80 KiB/s wr, 2 op/s Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.119 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.120 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.120 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.120 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:06:01 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:06:01 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1457138988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:06:01 localhost openstack_network_exporter[239388]: ERROR 10:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:06:01 localhost openstack_network_exporter[239388]: Feb 1 05:06:01 localhost openstack_network_exporter[239388]: ERROR 10:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:06:01 localhost openstack_network_exporter[239388]: Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.580 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.792 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.794 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11481MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:06:01 localhost 
nova_compute[274317]: 2026-02-01 10:06:01.794 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.795 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.860 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.861 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:06:01 localhost nova_compute[274317]: 2026-02-01 10:06:01.982 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:06:02 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2", "force": true, "format": "json"}]: dispatch Feb 1 05:06:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 1 05:06:02 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:ac2fcae5-84f4-4aa2-b0db-8c8658f5c9d2, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 1 05:06:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:06:02 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/758051834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:06:02 localhost nova_compute[274317]: 2026-02-01 10:06:02.506 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:06:02 localhost nova_compute[274317]: 2026-02-01 10:06:02.512 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:06:02 localhost nova_compute[274317]: 2026-02-01 10:06:02.533 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:06:02 localhost nova_compute[274317]: 2026-02-01 10:06:02.535 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:06:02 localhost nova_compute[274317]: 2026-02-01 10:06:02.535 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:06:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 100 KiB/s wr, 3 op/s Feb 1 05:06:03 localhost nova_compute[274317]: 2026-02-01 10:06:03.802 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:04 localhost nova_compute[274317]: 2026-02-01 10:06:04.535 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:04 localhost nova_compute[274317]: 2026-02-01 10:06:04.536 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:04 localhost nova_compute[274317]: 2026-02-01 10:06:04.536 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task 
ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:04 localhost nova_compute[274317]: 2026-02-01 10:06:04.536 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:06:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 68 KiB/s wr, 2 op/s Feb 1 05:06:05 localhost nova_compute[274317]: 2026-02-01 10:06:05.034 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:05 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a4d9ba32-76b0-4058-9528-d3cb71806502", "mode": "0755", "format": "json"}]: dispatch Feb 1 05:06:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a4d9ba32-76b0-4058-9528-d3cb71806502, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 1 05:06:05 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a4d9ba32-76b0-4058-9528-d3cb71806502, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 1 05:06:06 localhost nova_compute[274317]: 2026-02-01 10:06:06.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 68 KiB/s wr, 2 op/s Feb 1 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:06:07 localhost systemd[1]: tmp-crun.pQQdLQ.mount: Deactivated successfully. 
Feb 1 05:06:07 localhost podman[318635]: 2026-02-01 10:06:07.892660079 +0000 UTC m=+0.098236129 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 05:06:07 localhost podman[318631]: 2026-02-01 10:06:07.977134251 +0000 UTC m=+0.185549379 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Feb 1 05:06:07 localhost 
podman[318635]: 2026-02-01 10:06:07.999772283 +0000 UTC m=+0.205348343 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:06:08 localhost podman[318630]: 2026-02-01 10:06:07.853517845 +0000 UTC m=+0.069324292 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, release=1769056855, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=) Feb 1 05:06:08 localhost podman[318631]: 2026-02-01 10:06:08.00964283 +0000 UTC m=+0.218057938 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:06:08 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:06:08 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 05:06:08 localhost podman[318630]: 2026-02-01 10:06:08.090734536 +0000 UTC m=+0.306540993 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 05:06:08 localhost podman[318638]: 2026-02-01 10:06:08.098474456 +0000 UTC m=+0.297821413 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 1 05:06:08 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 05:06:08 localhost podman[318638]: 2026-02-01 10:06:08.135629689 +0000 UTC m=+0.334976636 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:06:08 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:06:08 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a4d9ba32-76b0-4058-9528-d3cb71806502", "force": true, "format": "json"}]: dispatch Feb 1 05:06:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a4d9ba32-76b0-4058-9528-d3cb71806502, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 1 05:06:08 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a4d9ba32-76b0-4058-9528-d3cb71806502, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 1 05:06:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 84 KiB/s wr, 3 op/s Feb 1 05:06:08 localhost nova_compute[274317]: 2026-02-01 10:06:08.804 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:09 localhost nova_compute[274317]: 2026-02-01 10:06:09.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:10 localhost nova_compute[274317]: 2026-02-01 10:06:10.068 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:10 localhost 
ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 36 KiB/s wr, 1 op/s Feb 1 05:06:11 localhost nova_compute[274317]: 2026-02-01 10:06:11.096 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:11 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:11.786 158655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '62:f5:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:c8:7b:0d:61:da'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 1 05:06:11 localhost nova_compute[274317]: 2026-02-01 10:06:11.787 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:11 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:11.788 158655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 1 05:06:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "35634281-ee3f-4e5c-8cf1-1c19c4d823ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/35634281-ee3f-4e5c-8cf1-1c19c4d823ce/.meta.tmp' Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/35634281-ee3f-4e5c-8cf1-1c19c4d823ce/.meta.tmp' to config b'/volumes/_nogroup/35634281-ee3f-4e5c-8cf1-1c19c4d823ce/.meta' Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:11 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "35634281-ee3f-4e5c-8cf1-1c19c4d823ce", "format": "json"}]: dispatch Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:11 localhost ceph-mgr[278126]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:12 localhost nova_compute[274317]: 2026-02-01 10:06:12.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:12 localhost nova_compute[274317]: 2026-02-01 10:06:12.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 1 05:06:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 53 KiB/s wr, 2 op/s Feb 1 05:06:13 localhost nova_compute[274317]: 2026-02-01 10:06:13.838 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 33 KiB/s wr, 1 op/s Feb 1 05:06:15 localhost nova_compute[274317]: 2026-02-01 10:06:15.069 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "35634281-ee3f-4e5c-8cf1-1c19c4d823ce", "format": "json"}]: dispatch Feb 1 05:06:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:06:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:06:15 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:06:15.242+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35634281-ee3f-4e5c-8cf1-1c19c4d823ce' of type subvolume Feb 1 05:06:15 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35634281-ee3f-4e5c-8cf1-1c19c4d823ce' of type subvolume Feb 1 05:06:15 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "35634281-ee3f-4e5c-8cf1-1c19c4d823ce", "force": true, "format": "json"}]: dispatch Feb 1 05:06:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:15 localhost ceph-mgr[278126]: 
[volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/35634281-ee3f-4e5c-8cf1-1c19c4d823ce'' moved to trashcan Feb 1 05:06:15 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:06:15 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35634281-ee3f-4e5c-8cf1-1c19c4d823ce, vol_name:cephfs) < "" Feb 1 05:06:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 33 KiB/s wr, 1 op/s Feb 1 05:06:17 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:17.791 158655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f18e6148-4a7e-452d-80cb-72c86b59e439, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 1 05:06:17 localhost ovn_controller[152787]: 2026-02-01T10:06:17Z|00270|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Feb 1 05:06:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb/.meta.tmp' Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb/.meta.tmp' to config b'/volumes/_nogroup/5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb/.meta' Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:18 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb", "format": "json"}]: dispatch Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:18 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 54 KiB/s wr, 2 op/s Feb 1 05:06:18 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. Feb 1 05:06:18 localhost nova_compute[274317]: 2026-02-01 10:06:18.841 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:18 localhost podman[318717]: 2026-02-01 10:06:18.867798879 +0000 UTC m=+0.083668237 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:06:18 localhost podman[318717]: 2026-02-01 10:06:18.880253226 +0000 UTC m=+0.096122614 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 1 05:06:18 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:06:19 localhost nova_compute[274317]: 2026-02-01 10:06:19.113 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:19 localhost nova_compute[274317]: 2026-02-01 10:06:19.113 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 1 05:06:19 localhost nova_compute[274317]: 2026-02-01 10:06:19.129 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 1 05:06:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:06:19 localhost systemd[1]: tmp-crun.SxjE92.mount: Deactivated successfully. 
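[editor's note] The ceph-mgr audit entries above trace a CephFS subvolume lifecycle driven by client.openstack: an "fs subvolume create" (1 GiB, namespace-isolated, mode 0755), an "fs subvolume getpath", an "fs clone status" probe that the mgr rejects with EOPNOTSUPP for a plain subvolume, and a forced "fs subvolume rm" that moves the path to the trashcan and queues an async purge job. Below is a minimal sketch of the equivalent CLI sequence, assuming a reachable cluster and a keyring for the same client id the log shows; the subvolume name is illustrative, not one of the UUIDs from the log.

```python
# Sketch only: replay the subvolume lifecycle recorded by the mgr audit log,
# using the same command prefixes ("fs subvolume create/getpath/rm",
# "fs clone status"). Cluster access and the client id are assumptions.
import subprocess

def ceph(*args):
    return subprocess.run(
        ["ceph", "--id", "openstack", *args],
        capture_output=True, text=True,
    )

SUB = "demo-share"  # illustrative name; the log uses UUID share ids

ceph("fs", "subvolume", "create", "cephfs", SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
print(ceph("fs", "subvolume", "getpath", "cephfs", SUB).stdout.strip())
# For a plain subvolume this reports EOPNOTSUPP, matching the
# "clone-status is not allowed on subvolume ... of type subvolume" reply above.
print(ceph("fs", "clone", "status", "cephfs", SUB).stderr.strip())
ceph("fs", "subvolume", "rm", "cephfs", SUB, "--force")
```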
Feb 1 05:06:19 localhost podman[318736]: 2026-02-01 10:06:19.862304119 +0000 UTC m=+0.078429025 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:06:19 localhost podman[318736]: 2026-02-01 10:06:19.900963039 +0000 UTC m=+0.117087985 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 1 05:06:19 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. 
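[editor's note] Each healthcheck cycle above has the same shape: systemd starts a transient /usr/bin/podman healthcheck run <container-id> unit, podman records a health_status=healthy event followed by exec_died, and the unit deactivates. A minimal sketch of running the same probe by hand follows; the container name is taken from the container_name= labels in the log and is otherwise an assumption about what exists locally.

```python
# Sketch: trigger the same probe the transient systemd units run above.
# "podman_exporter" is one of the container_name labels from the log;
# substitute any locally present container that defines a healthcheck.
import subprocess

NAME = "podman_exporter"

result = subprocess.run(["podman", "healthcheck", "run", NAME])
# Exit status 0 corresponds to the health_status=healthy journal events.
print(f"{NAME}: {'healthy' if result.returncode == 0 else 'unhealthy'}")
```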
Feb 1 05:06:20 localhost nova_compute[274317]: 2026-02-01 10:06:20.092 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:20 localhost nova_compute[274317]: 2026-02-01 10:06:20.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 233 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 38 KiB/s wr, 1 op/s Feb 1 05:06:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:06:21 Feb 1 05:06:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:06:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:06:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['backups', 'volumes', 'images', 'manila_metadata', 'vms', '.mgr', 'manila_data'] Feb 1 05:06:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. 
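[editor's note] The balancer pass above ran in upmap mode with a 0.05 max-misplaced ratio over the listed pools and prepared 0/10 changes, i.e. no remapping was needed. A sketch of checking that state directly, assuming CLI access to the same cluster; "ceph balancer status" is the standard query, and the JSON keys read below are assumptions about its output shape.

```python
# Sketch: confirm the balancer state behind the "Mode upmap, max misplaced
# 0.050000 ... prepared 0/10 changes" entries. Requires ceph CLI access;
# the printed keys are assumed to be present in the JSON status output.
import json
import subprocess

out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
status = json.loads(out)
print("active:", status.get("active"), "mode:", status.get("mode"))
```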
Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb", "format": "json"}]: dispatch Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 1 05:06:21 localhost ceph-33fac0b9-80c7-560f-918a-c92d3021ca1e-mgr-np0005604215-uhhqtv[278122]: 2026-02-01T10:06:21.842+0000 7f93ec23e640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb' of type subvolume Feb 1 05:06:21 localhost ceph-mgr[278126]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb' of type subvolume Feb 1 05:06:21 localhost ceph-mgr[278126]: log_channel(audit) log [DBG] : from='client.15654 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb", "force": true, "format": "json"}]: dispatch Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 
Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:06:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00291850964893914 of space, bias 4.0, pg target 2.3231336805555554 quantized to 16 (current 16) Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb'' moved to trashcan Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 1 05:06:21 localhost ceph-mgr[278126]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5e0c14be-c6b0-4cd5-a1d1-18f3baab03fb, vol_name:cephfs) < "" Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:06:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:06:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 78 KiB/s wr, 3 op/s Feb 1 05:06:23 localhost nova_compute[274317]: 2026-02-01 10:06:23.885 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 61 KiB/s wr, 2 op/s Feb 1 05:06:25 localhost nova_compute[274317]: 2026-02-01 10:06:25.096 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:26 localhost nova_compute[274317]: 2026-02-01 10:06:26.001 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 
active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 61 KiB/s wr, 2 op/s Feb 1 05:06:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 75 KiB/s wr, 3 op/s Feb 1 05:06:28 localhost nova_compute[274317]: 2026-02-01 10:06:28.918 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:30 localhost podman[236852]: time="2026-02-01T10:06:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:06:30 localhost podman[236852]: @ - - [01/Feb/2026:10:06:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:06:30 localhost podman[236852]: @ - - [01/Feb/2026:10:06:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18337 "" "Go-http-client/1.1" Feb 1 05:06:30 localhost nova_compute[274317]: 2026-02-01 10:06:30.125 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 54 KiB/s wr, 2 op/s Feb 1 05:06:31 localhost openstack_network_exporter[239388]: ERROR 10:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:06:31 localhost openstack_network_exporter[239388]: Feb 1 05:06:31 localhost openstack_network_exporter[239388]: ERROR 10:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:06:31 localhost openstack_network_exporter[239388]: Feb 1 05:06:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 58 KiB/s wr, 3 op/s Feb 1 05:06:33 localhost nova_compute[274317]: 2026-02-01 10:06:33.976 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 18 KiB/s wr, 1 op/s Feb 1 05:06:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 1 05:06:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3217422935' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 1 05:06:34 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 1 05:06:34 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3217422935' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 1 05:06:35 localhost nova_compute[274317]: 2026-02-01 10:06:35.126 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 18 KiB/s wr, 1 op/s Feb 1 05:06:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 18 KiB/s wr, 1 op/s Feb 1 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:06:38 localhost podman[318758]: 2026-02-01 10:06:38.879486291 +0000 UTC m=+0.086237616 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.7, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 1 05:06:38 localhost podman[318758]: 2026-02-01 10:06:38.89073739 +0000 UTC m=+0.097488725 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 1 05:06:38 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
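[editor's note] The recurring openstack_network_exporter appctl errors ("please specify an existing datapath") are consistent with PMD statistics queries (dpif-netdev/pmd-perf-show, pmd-rxq-show) being issued on a host without a userspace OVS datapath, so they likely indicate nothing to report rather than a fault. The exporters themselves publish metrics on the host-network ports given in their config_data in this journal (9105 for openstack_network_exporter above, 9100 for node_exporter and 9882 for podman_exporter elsewhere). A sketch of probing those endpoints, assuming the conventional Prometheus /metrics path and local reachability:

```python
# Sketch: probe the exporter ports taken from the config_data entries in this
# journal (openstack_network_exporter 9105, node_exporter 9100,
# podman_exporter 9882). The /metrics path is the usual Prometheus
# convention, assumed here.
from urllib.request import urlopen

for port, name in [(9105, "openstack_network_exporter"),
                   (9100, "node_exporter"),
                   (9882, "podman_exporter")]:
    try:
        with urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
            body = resp.read().decode()
        print(f"{name}: {len(body.splitlines())} metric lines")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")
```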
Feb 1 05:06:38 localhost podman[318760]: 2026-02-01 10:06:38.93873779 +0000 UTC m=+0.137980643 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Feb 1 05:06:39 localhost nova_compute[274317]: 2026-02-01 10:06:39.003 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:39 localhost podman[318759]: 2026-02-01 10:06:39.021471547 +0000 UTC m=+0.225122316 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 05:06:39 localhost podman[318760]: 2026-02-01 10:06:39.045687369 +0000 UTC m=+0.244930212 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 1 05:06:39 localhost podman[318766]: 2026-02-01 10:06:39.056713391 +0000 UTC m=+0.251735642 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 1 05:06:39 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. 
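[editor's note] The periodic "GET /v4.9.3/libpod/containers/json" and "/stats" requests logged by podman[236852] earlier in this journal are the podman service API being polled over /run/podman/podman.sock, the same socket podman_exporter mounts via CONTAINER_HOST. A minimal sketch of issuing that call from Python over the unix socket; the socket path and API version are copied from the log, and root access to the socket is assumed.

```python
# Sketch: issue the same libpod API call the podman service log records
# ("GET /v4.9.3/libpod/containers/json?all=true...") over the unix socket.
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to an AF_UNIX socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c["Names"][0], c["State"])
```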
Feb 1 05:06:39 localhost podman[318766]: 2026-02-01 10:06:39.067741643 +0000 UTC m=+0.262763904 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 1 05:06:39 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:06:39 localhost podman[318759]: 2026-02-01 10:06:39.150548002 +0000 UTC m=+0.354198761 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 1 05:06:39 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. Feb 1 05:06:39 localhost systemd[1]: tmp-crun.gV4HZP.mount: Deactivated successfully. Feb 1 05:06:40 localhost nova_compute[274317]: 2026-02-01 10:06:40.161 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s wr, 0 op/s Feb 1 05:06:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:41.784 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:06:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:41.784 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:06:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:06:41.785 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:06:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s wr, 0 op/s Feb 1 05:06:44 localhost nova_compute[274317]: 2026-02-01 10:06:44.041 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:45 localhost nova_compute[274317]: 2026-02-01 10:06:45.163 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:48 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:49 localhost nova_compute[274317]: 2026-02-01 10:06:49.045 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
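[editor's note] The Acquiring/acquired/released DEBUG triplet for the "_check_child_processes" lock above is the trace oslo.concurrency emits around a named in-process lock. A minimal sketch of producing the same pattern; the function body and lock name below are illustrative, only lockutils.synchronized is the real helper.

```python
# Sketch: reproduce the lockutils DEBUG triplet seen in the journal
# (Acquiring lock ... / Lock ... acquired :: waited / Lock ... "released" :: held).
import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)  # surface the lockutils DEBUG lines


@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs while the named in-process lock is held; entry and exit produce
    # the "acquired"/"released" messages with wait and hold times.
    return []


check_child_processes()
```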
Feb 1 05:06:49 localhost podman[318838]: 2026-02-01 10:06:49.867480319 +0000 UTC m=+0.084858255 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 1 05:06:49 localhost podman[318838]: 2026-02-01 10:06:49.877891391 +0000 UTC m=+0.095269317 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute) Feb 1 05:06:49 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:06:50 localhost nova_compute[274317]: 2026-02-01 10:06:50.195 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:50 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:50 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. Feb 1 05:06:50 localhost podman[318857]: 2026-02-01 10:06:50.861799153 +0000 UTC m=+0.077885558 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:06:50 localhost podman[318857]: 2026-02-01 10:06:50.899922966 +0000 UTC m=+0.116009331 container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 05:06:50 localhost systemd[1]: 
a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:06:51 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:06:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 1 05:06:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 1 05:06:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 1 05:06:52 localhost ceph-mon[298604]: log_channel(audit) log [INF] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:06:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 1 05:06:52 localhost ceph-mgr[278126]: [progress INFO root] update: starting ev b3f6b11c-ac50-459f-a02b-938037b5c87f (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:06:52 localhost ceph-mgr[278126]: [progress INFO root] complete: finished ev b3f6b11c-ac50-459f-a02b-938037b5c87f (Updating node-proxy deployment (+3 -> 3)) Feb 1 05:06:52 localhost ceph-mgr[278126]: [progress INFO root] Completed event b3f6b11c-ac50-459f-a02b-938037b5c87f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 1 05:06:52 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 1 05:06:52 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 1 05:06:52 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:53 localhost ceph-mon[298604]: from='mgr.34541 172.18.0.108:0/3988766652' entity='mgr.np0005604215.uhhqtv' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 1 05:06:53 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:06:54 localhost nova_compute[274317]: 2026-02-01 10:06:54.086 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:54 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:55 localhost nova_compute[274317]: 2026-02-01 10:06:55.215 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:06:55 localhost ceph-mon[298604]: 
mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:06:56 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:56 localhost ceph-mgr[278126]: [progress INFO root] Writing back 50 completed events Feb 1 05:06:56 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 1 05:06:57 localhost ceph-mon[298604]: from='mgr.34541 ' entity='mgr.np0005604215.uhhqtv' Feb 1 05:06:58 localhost nova_compute[274317]: 2026-02-01 10:06:58.102 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:06:58 localhost nova_compute[274317]: 2026-02-01 10:06:58.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 1 05:06:58 localhost nova_compute[274317]: 2026-02-01 10:06:58.103 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 1 05:06:58 localhost nova_compute[274317]: 2026-02-01 10:06:58.123 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 1 05:06:58 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:06:59 localhost nova_compute[274317]: 2026-02-01 10:06:59.135 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:00 localhost podman[236852]: time="2026-02-01T10:07:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:07:00 localhost podman[236852]: @ - - [01/Feb/2026:10:07:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:07:00 localhost podman[236852]: @ - - [01/Feb/2026:10:07:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18346 "" "Go-http-client/1.1" Feb 1 05:07:00 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:00 localhost nova_compute[274317]: 2026-02-01 10:07:00.262 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:00 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:01 localhost nova_compute[274317]: 2026-02-01 10:07:01.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:01 localhost nova_compute[274317]: 2026-02-01 10:07:01.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:01 localhost openstack_network_exporter[239388]: ERROR 10:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:07:01 localhost openstack_network_exporter[239388]: Feb 1 05:07:01 localhost openstack_network_exporter[239388]: ERROR 10:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:07:01 localhost openstack_network_exporter[239388]: Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.099 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.125 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.126 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] 
Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.127 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.127 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Auditing locally available compute resources for np0005604215.localdomain (node: np0005604215.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.128 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:07:02 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:07:02 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3163310464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.585 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:07:02 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.795 274321 WARNING nova.virt.libvirt.driver [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.797 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Hypervisor/Node resource view: name=np0005604215.localdomain free_ram=11467MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.797 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 1 05:07:02 localhost nova_compute[274317]: 2026-02-01 10:07:02.798 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.106 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.107 274321 DEBUG 
nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Final resource view: name=np0005604215.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.124 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.410 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.411 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.412 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.413 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.413 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.414 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.415 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.415 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.415 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.415 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.415 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.416 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.416 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.416 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceilometer_agent_compute[232200]: 2026-02-01 10:07:03.416 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 1 05:07:03 localhost ceph-mon[298604]: mon.np0005604215@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 1 05:07:03 localhost ceph-mon[298604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2249660806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.526 274321 DEBUG oslo_concurrency.processutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.532 274321 DEBUG nova.compute.provider_tree [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed in ProviderTree for provider: d5eeed9a-e4d0-4244-8d4e-39e5c8263590 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.546 274321 DEBUG nova.scheduler.client.report [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Inventory has not changed for provider d5eeed9a-e4d0-4244-8d4e-39e5c8263590 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.549 274321 DEBUG nova.compute.resource_tracker [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Compute_service record updated for np0005604215.localdomain:np0005604215.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 1 05:07:03 localhost nova_compute[274317]: 2026-02-01 10:07:03.549 274321 DEBUG oslo_concurrency.lockutils [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 1 05:07:04 localhost nova_compute[274317]: 2026-02-01 10:07:04.184 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:04 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:05 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:05 
localhost nova_compute[274317]: 2026-02-01 10:07:05.306 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:05 localhost nova_compute[274317]: 2026-02-01 10:07:05.550 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:05 localhost nova_compute[274317]: 2026-02-01 10:07:05.551 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:06 localhost nova_compute[274317]: 2026-02-01 10:07:06.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:06 localhost nova_compute[274317]: 2026-02-01 10:07:06.100 274321 DEBUG nova.compute.manager [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 1 05:07:06 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:07 localhost nova_compute[274317]: 2026-02-01 10:07:07.101 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:08 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:09 localhost nova_compute[274317]: 2026-02-01 10:07:09.219 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:07:09 localhost podman[319011]: 2026-02-01 10:07:09.884914 +0000 UTC m=+0.091086757 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, config_id=openstack_network_exporter, version=9.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, release=1769056855, com.redhat.component=ubi9-minimal-container) Feb 1 05:07:09 localhost podman[319012]: 2026-02-01 10:07:09.92649983 +0000 UTC m=+0.131744249 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 1 05:07:09 localhost podman[319012]: 2026-02-01 10:07:09.963694904 +0000 UTC m=+0.168939283 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Feb 1 05:07:09 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully. 
Feb 1 05:07:09 localhost podman[319011]: 2026-02-01 10:07:09.983398656 +0000 UTC m=+0.189571443 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 1 05:07:10 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully. 
Feb 1 05:07:10 localhost podman[319013]: 2026-02-01 10:07:10.050343113 +0000 UTC m=+0.252558598 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 1 05:07:10 localhost podman[319014]: 2026-02-01 10:07:10.098232149 +0000 UTC m=+0.295978965 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:07:10 localhost podman[319013]: 2026-02-01 10:07:10.118469368 +0000 UTC m=+0.320684883 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 1 05:07:10 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully. Feb 1 05:07:10 localhost podman[319014]: 2026-02-01 10:07:10.134664789 +0000 UTC m=+0.332411575 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 1 05:07:10 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully. Feb 1 05:07:10 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:10 localhost nova_compute[274317]: 2026-02-01 10:07:10.336 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:10 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:10 localhost systemd[1]: tmp-crun.PxHjc8.mount: Deactivated successfully. 
Feb 1 05:07:11 localhost nova_compute[274317]: 2026-02-01 10:07:11.100 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:12 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:14 localhost nova_compute[274317]: 2026-02-01 10:07:14.262 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:14 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:15 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:15 localhost nova_compute[274317]: 2026-02-01 10:07:15.375 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:16 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:18 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:19 localhost nova_compute[274317]: 2026-02-01 10:07:19.297 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:19 localhost nova_compute[274317]: 2026-02-01 10:07:19.972 274321 DEBUG oslo_service.periodic_task [None req-e04f1a51-10c3-4ec1-bb94-d41625939b91 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 1 05:07:20 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:20 localhost nova_compute[274317]: 2026-02-01 10:07:20.377 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:20 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6. 
Feb 1 05:07:20 localhost podman[319095]: 2026-02-01 10:07:20.865486534 +0000 UTC m=+0.080837827 container health_status 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 1 05:07:20 localhost podman[319095]: 2026-02-01 10:07:20.872901695 +0000 UTC m=+0.088252968 container exec_died 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 1 05:07:20 localhost systemd[1]: 3bbf98158f72b9f2627ab6262c9747e51e0453ba4964c8b85046158976642de6.service: Deactivated successfully. Feb 1 05:07:21 localhost ceph-mgr[278126]: [balancer INFO root] Optimize plan auto_2026-02-01_10:07:21 Feb 1 05:07:21 localhost ceph-mgr[278126]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 1 05:07:21 localhost ceph-mgr[278126]: [balancer INFO root] do_upmap Feb 1 05:07:21 localhost ceph-mgr[278126]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'backups', 'manila_metadata', 'images', 'manila_data'] Feb 1 05:07:21 localhost ceph-mgr[278126]: [balancer INFO root] prepared 0/10 changes Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] scanning for idle connections.. Feb 1 05:07:21 localhost ceph-mgr[278126]: [volumes INFO mgr_util] cleaning up connections: [] Feb 1 05:07:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d. 
Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] _maybe_adjust Feb 1 05:07:21 localhost podman[319113]: 2026-02-01 10:07:21.898326702 +0000 UTC m=+0.078129213 container health_status a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 1 05:07:21 localhost ceph-mgr[278126]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002965129466778336 of space, bias 4.0, pg target 2.3602430555555554 quantized to 16 (current 16) Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 1 05:07:21 localhost podman[319113]: 2026-02-01 10:07:21.9137087 +0000 UTC m=+0.093511211 
container exec_died a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: images, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:07:21 localhost ceph-mgr[278126]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 1 05:07:21 localhost systemd[1]: a1727fb04d4959d73005ec0bc3202e0bb0e547dc9cc4d8bed82bddac8cdc638d.service: Deactivated successfully. Feb 1 05:07:22 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:23 localhost sshd[319136]: main: sshd: ssh-rsa algorithm is disabled Feb 1 05:07:23 localhost systemd-logind[761]: New session 78 of user zuul. Feb 1 05:07:23 localhost systemd[1]: Started Session 78 of User zuul. 
Feb 1 05:07:23 localhost python3[319158]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-3279-acd3-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 1 05:07:24 localhost nova_compute[274317]: 2026-02-01 10:07:24.326 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:24 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:25 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:25 localhost nova_compute[274317]: 2026-02-01 10:07:25.413 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:26 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:28 localhost systemd[1]: session-78.scope: Deactivated successfully. Feb 1 05:07:28 localhost systemd-logind[761]: Session 78 logged out. Waiting for processes to exit. Feb 1 05:07:28 localhost systemd-logind[761]: Removed session 78. Feb 1 05:07:28 localhost ovn_controller[152787]: 2026-02-01T10:07:28Z|00271|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Feb 1 05:07:28 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:29 localhost nova_compute[274317]: 2026-02-01 10:07:29.374 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:30 localhost podman[236852]: time="2026-02-01T10:07:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 1 05:07:30 localhost podman[236852]: @ - - [01/Feb/2026:10:07:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155356 "" "Go-http-client/1.1" Feb 1 05:07:30 localhost podman[236852]: @ - - [01/Feb/2026:10:07:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18350 "" "Go-http-client/1.1" Feb 1 05:07:30 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:30 localhost nova_compute[274317]: 2026-02-01 10:07:30.449 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:30 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:31 localhost openstack_network_exporter[239388]: ERROR 10:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 1 05:07:31 localhost openstack_network_exporter[239388]: Feb 1 05:07:31 localhost openstack_network_exporter[239388]: ERROR 10:07:31 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 1 05:07:31 localhost openstack_network_exporter[239388]: Feb 1 05:07:32 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:34 localhost nova_compute[274317]: 2026-02-01 10:07:34.404 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:34 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:35 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:35 localhost nova_compute[274317]: 2026-02-01 10:07:35.485 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:36 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:38 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:39 localhost nova_compute[274317]: 2026-02-01 10:07:39.440 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:40 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 1 05:07:40 localhost nova_compute[274317]: 2026-02-01 10:07:40.522 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 1 05:07:40 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Feb 1 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc. Feb 1 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5. Feb 1 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835. Feb 1 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603. Feb 1 05:07:40 localhost systemd[1]: tmp-crun.uqqYQS.mount: Deactivated successfully. 
Feb 1 05:07:40 localhost podman[319168]: 2026-02-01 10:07:40.885201663 +0000 UTC m=+0.084536602 container health_status c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 1 05:07:40 localhost podman[319161]: 2026-02-01 10:07:40.859177474 +0000 UTC m=+0.069249227 container health_status 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1769056855, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.7, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9/ubi-minimal, io.buildah.version=1.33.7)
Feb 1 05:07:40 localhost podman[319169]: 2026-02-01 10:07:40.927674175 +0000 UTC m=+0.127375015 container health_status c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 1 05:07:40 localhost podman[319169]: 2026-02-01 10:07:40.933219558 +0000 UTC m=+0.132920328 container exec_died c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 1 05:07:40 localhost systemd[1]: c52b82d6176d722c60ca710eaef0fb255b9c854186667ee27d01230a5b524603.service: Deactivated successfully.
Feb 1 05:07:40 localhost podman[319161]: 2026-02-01 10:07:40.94771913 +0000 UTC m=+0.157790923 container exec_died 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d, name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, version=9.7, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'df76d86ebef250bd697134497aae1383ef1748f5be4a1e05ee2fab91e8becbf8'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc)
Feb 1 05:07:40 localhost systemd[1]: 1e81c1b86182cdec2675267a0409f7c3101cf714634d5a55e5ebe8324ebb45fc.service: Deactivated successfully.
Feb 1 05:07:41 localhost podman[319162]: 2026-02-01 10:07:41.028718191 +0000 UTC m=+0.231944811 container health_status 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 1 05:07:41 localhost podman[319162]: 2026-02-01 10:07:41.037840105 +0000 UTC m=+0.241066805 container exec_died 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 1 05:07:41 localhost podman[319168]: 2026-02-01 10:07:41.050127327 +0000 UTC m=+0.249462326 container exec_died c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b489452d0fbfc60248928d346c9d0d46319800de5f37757197e2af2ded932519'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 1 05:07:41 localhost systemd[1]: 412d8177703a1df6d718b121467f1d2144461403048e54909e20ac7820fdf5d5.service: Deactivated successfully.
Feb 1 05:07:41 localhost systemd[1]: c2ff29759d63e369dfc47cb104999aeff3b911c38bef981d5b2e8d25093f2835.service: Deactivated successfully.
Feb 1 05:07:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:07:41.785 158655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 1 05:07:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:07:41.786 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 1 05:07:41 localhost ovn_metadata_agent[158650]: 2026-02-01 10:07:41.787 158655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 1 05:07:42 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Feb 1 05:07:44 localhost nova_compute[274317]: 2026-02-01 10:07:44.481 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 1 05:07:44 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Feb 1 05:07:45 localhost ceph-mon[298604]: mon.np0005604215@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 1 05:07:45 localhost nova_compute[274317]: 2026-02-01 10:07:45.567 274321 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 1 05:07:46 localhost ceph-mgr[278126]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Feb 1 05:07:46 localhost sshd[319242]: main: sshd: ssh-rsa algorithm is disabled
Feb 1 05:07:46 localhost systemd-logind[761]: New session 79 of user zuul.
Feb 1 05:07:46 localhost systemd[1]: Started Session 79 of User zuul.